2026-03-23 00:00:12.824499 | Job console starting
2026-03-23 00:00:12.859127 | Updating git repos
2026-03-23 00:00:13.019082 | Cloning repos into workspace
2026-03-23 00:00:13.437929 | Restoring repo states
2026-03-23 00:00:13.535959 | Merging changes
2026-03-23 00:00:13.535986 | Checking out repos
2026-03-23 00:00:14.253923 | Preparing playbooks
2026-03-23 00:00:15.480330 | Running Ansible setup
2026-03-23 00:00:23.512486 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-23 00:00:25.842802 |
2026-03-23 00:00:25.844261 | PLAY [Base pre]
2026-03-23 00:00:25.895506 |
2026-03-23 00:00:25.895654 | TASK [Setup log path fact]
2026-03-23 00:00:25.947175 | orchestrator | ok
2026-03-23 00:00:25.988585 |
2026-03-23 00:00:25.988767 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-23 00:00:26.051016 | orchestrator | ok
2026-03-23 00:00:26.071756 |
2026-03-23 00:00:26.071889 | TASK [emit-job-header : Print job information]
2026-03-23 00:00:26.161098 | # Job Information
2026-03-23 00:00:26.161255 | Ansible Version: 2.16.14
2026-03-23 00:00:26.161284 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-23 00:00:26.161313 | Pipeline: periodic-midnight
2026-03-23 00:00:26.161333 | Executor: 521e9411259a
2026-03-23 00:00:26.161350 | Triggered by: https://github.com/osism/testbed
2026-03-23 00:00:26.161368 | Event ID: e0f246260a7c409a90dea29847467315
2026-03-23 00:00:26.172408 |
2026-03-23 00:00:26.172523 | LOOP [emit-job-header : Print node information]
2026-03-23 00:00:26.401439 | orchestrator | ok:
2026-03-23 00:00:26.401611 | orchestrator | # Node Information
2026-03-23 00:00:26.401642 | orchestrator | Inventory Hostname: orchestrator
2026-03-23 00:00:26.401663 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-23 00:00:26.401681 | orchestrator | Username: zuul-testbed02
2026-03-23 00:00:26.401698 | orchestrator | Distro: Debian 12.13
2026-03-23 00:00:26.401718 | orchestrator | Provider: static-testbed
2026-03-23 00:00:26.401735 | orchestrator | Region:
2026-03-23 00:00:26.401753 | orchestrator | Label: testbed-orchestrator
2026-03-23 00:00:26.401769 | orchestrator | Product Name: OpenStack Nova
2026-03-23 00:00:26.401785 | orchestrator | Interface IP: 81.163.193.140
2026-03-23 00:00:26.418770 |
2026-03-23 00:00:26.429604 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-23 00:00:27.745322 | orchestrator -> localhost | changed
2026-03-23 00:00:27.758924 |
2026-03-23 00:00:27.759056 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-23 00:00:31.420038 | orchestrator -> localhost | changed
2026-03-23 00:00:31.442797 |
2026-03-23 00:00:31.442966 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-23 00:00:32.401304 | orchestrator -> localhost | ok
2026-03-23 00:00:32.407386 |
2026-03-23 00:00:32.407493 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-23 00:00:32.462694 | orchestrator | ok
2026-03-23 00:00:32.631094 | orchestrator | included: /var/lib/zuul/builds/c4227f186e3748eea4a17831dc9e109f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-23 00:00:32.672108 |
2026-03-23 00:00:32.672229 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-23 00:00:35.971564 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-23 00:00:35.971809 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c4227f186e3748eea4a17831dc9e109f/work/c4227f186e3748eea4a17831dc9e109f_id_rsa
2026-03-23 00:00:35.971849 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c4227f186e3748eea4a17831dc9e109f/work/c4227f186e3748eea4a17831dc9e109f_id_rsa.pub
2026-03-23 00:00:35.971876 | orchestrator -> localhost | The key fingerprint is:
2026-03-23 00:00:35.971904 | orchestrator -> localhost | SHA256:9Npv/HDhpbppzKE9yT1pd/Zo/SYjESkSweaa4dbTXBY zuul-build-sshkey
2026-03-23 00:00:35.971927 | orchestrator -> localhost | The key's randomart image is:
2026-03-23 00:00:35.971962 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-23 00:00:35.971984 | orchestrator -> localhost | | ... |
2026-03-23 00:00:35.972006 | orchestrator -> localhost | | + E |
2026-03-23 00:00:35.972026 | orchestrator -> localhost | | o.. o |
2026-03-23 00:00:35.972045 | orchestrator -> localhost | | ..o.. = |
2026-03-23 00:00:35.972064 | orchestrator -> localhost | | . =S+.+ .. .|
2026-03-23 00:00:35.972089 | orchestrator -> localhost | | = ooo o. + |
2026-03-23 00:00:35.972109 | orchestrator -> localhost | | . ...B.=+o |
2026-03-23 00:00:35.972130 | orchestrator -> localhost | | ..%=O.*|
2026-03-23 00:00:35.972150 | orchestrator -> localhost | | o=BoB*|
2026-03-23 00:00:35.972170 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-23 00:00:35.972224 | orchestrator -> localhost | ok: Runtime: 0:00:01.926334
2026-03-23 00:00:35.981466 |
2026-03-23 00:00:35.981596 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-23 00:00:36.037252 | orchestrator | ok
2026-03-23 00:00:36.075666 | orchestrator | included: /var/lib/zuul/builds/c4227f186e3748eea4a17831dc9e109f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-23 00:00:36.090442 |
2026-03-23 00:00:36.090553 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-23 00:00:36.132975 | orchestrator | skipping: Conditional result was False
2026-03-23 00:00:36.141312 |
2026-03-23 00:00:36.141425 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-23 00:00:37.147500 | orchestrator | changed
2026-03-23 00:00:37.160157 |
2026-03-23 00:00:37.160275 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-23 00:00:37.488068 | orchestrator | ok
2026-03-23 00:00:37.497955 |
2026-03-23 00:00:37.498064 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-23 00:00:37.975615 | orchestrator | ok
2026-03-23 00:00:37.985369 |
2026-03-23 00:00:37.985464 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-23 00:00:38.492317 | orchestrator | ok
2026-03-23 00:00:38.499740 |
2026-03-23 00:00:38.499818 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-23 00:00:38.583290 | orchestrator | skipping: Conditional result was False
2026-03-23 00:00:38.589126 |
2026-03-23 00:00:38.589212 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-23 00:00:39.805623 | orchestrator -> localhost | changed
2026-03-23 00:00:39.816611 |
2026-03-23 00:00:39.816695 | TASK [add-build-sshkey : Add back temp key]
2026-03-23 00:00:41.131874 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c4227f186e3748eea4a17831dc9e109f/work/c4227f186e3748eea4a17831dc9e109f_id_rsa (zuul-build-sshkey)
2026-03-23 00:00:41.132089 | orchestrator -> localhost | ok: Runtime: 0:00:00.043951
2026-03-23 00:00:41.139148 |
2026-03-23 00:00:41.139240 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-23 00:00:41.847520 | orchestrator | ok
2026-03-23 00:00:41.853463 |
2026-03-23 00:00:41.853562 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-23 00:00:41.909413 | orchestrator | skipping: Conditional result was False
2026-03-23 00:00:42.108038 |
2026-03-23 00:00:42.108173 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-23 00:00:42.689302 | orchestrator | ok
2026-03-23 00:00:42.765572 |
2026-03-23 00:00:42.765719 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-23 00:00:42.828896 | orchestrator | ok
2026-03-23 00:00:42.848879 |
2026-03-23 00:00:42.849004 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-23 00:00:43.520847 | orchestrator -> localhost | ok
2026-03-23 00:00:43.528353 |
2026-03-23 00:00:43.528462 | TASK [validate-host : Collect information about the host]
2026-03-23 00:00:45.318218 | orchestrator | ok
2026-03-23 00:00:45.363865 |
2026-03-23 00:00:45.364014 | TASK [validate-host : Sanitize hostname]
2026-03-23 00:00:45.541620 | orchestrator | ok
2026-03-23 00:00:45.550967 |
2026-03-23 00:00:45.551094 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-23 00:00:47.220017 | orchestrator -> localhost | changed
2026-03-23 00:00:47.226283 |
2026-03-23 00:00:47.226382 | TASK [validate-host : Collect information about zuul worker]
2026-03-23 00:00:48.025976 | orchestrator | ok
2026-03-23 00:00:48.031364 |
2026-03-23 00:00:48.031475 | TASK [validate-host : Write out all zuul information for each host]
2026-03-23 00:00:49.660419 | orchestrator -> localhost | changed
2026-03-23 00:00:49.679312 |
2026-03-23 00:00:49.679417 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-23 00:00:49.976408 | orchestrator | ok
2026-03-23 00:00:49.995267 |
2026-03-23 00:00:49.995402 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-23 00:02:13.793233 | orchestrator | changed:
2026-03-23 00:02:13.793713 | orchestrator | .d..t...... src/
2026-03-23 00:02:13.793772 | orchestrator | .d..t...... src/github.com/
2026-03-23 00:02:13.793806 | orchestrator | .d..t...... src/github.com/osism/
2026-03-23 00:02:13.793835 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-23 00:02:13.793864 | orchestrator | RedHat.yml
2026-03-23 00:02:13.810325 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-23 00:02:13.810343 | orchestrator | RedHat.yml
2026-03-23 00:02:13.810397 | orchestrator | = 1.53.0"...
2026-03-23 00:02:25.152716 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-23 00:02:25.489157 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-23 00:02:26.183309 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-23 00:02:26.247059 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-23 00:02:26.767165 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-23 00:02:26.829182 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-23 00:02:27.584464 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-23 00:02:27.584541 | orchestrator |
2026-03-23 00:02:27.584548 | orchestrator | Providers are signed by their developers.
2026-03-23 00:02:27.584553 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-23 00:02:27.584559 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-23 00:02:27.584566 | orchestrator |
2026-03-23 00:02:27.584570 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-23 00:02:27.584583 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-23 00:02:27.584588 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-23 00:02:27.584592 | orchestrator | you run "tofu init" in the future.
2026-03-23 00:02:27.584922 | orchestrator |
2026-03-23 00:02:27.584931 | orchestrator | OpenTofu has been successfully initialized!
2026-03-23 00:02:27.584940 | orchestrator |
2026-03-23 00:02:27.584944 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-23 00:02:27.584948 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-23 00:02:27.584952 | orchestrator | should now work.
2026-03-23 00:02:27.584956 | orchestrator |
2026-03-23 00:02:27.584960 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-23 00:02:27.584964 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-23 00:02:27.584969 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-23 00:02:27.848126 | orchestrator | Created and switched to workspace "ci"!
2026-03-23 00:02:27.848247 | orchestrator |
2026-03-23 00:02:27.848256 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-23 00:02:27.848262 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-23 00:02:27.848283 | orchestrator | for this configuration.
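For reference, the provider set this `tofu init` resolved could be declared in the testbed configuration roughly as follows. This is a hypothetical sketch: only the installed versions (`local` v2.7.0, `null` v3.2.4, `openstack` v3.4.0) and the `>= 2.2.0` constraint for `hashicorp/local` are confirmed by the log; the other constraints are left open because the log does not show them.

```hcl
# Hypothetical sketch of the required_providers block behind the init
# output above. Source addresses and the ">= 2.2.0" constraint for
# hashicorp/local come from the log; everything else is assumed.
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null"
    }
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}
```

Committing the generated `.terraform.lock.hcl`, as the init output recommends, pins these providers to the exact versions installed here.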
2026-03-23 00:02:27.982107 | orchestrator | ci.auto.tfvars
2026-03-23 00:02:27.990325 | orchestrator | default_custom.tf
2026-03-23 00:02:29.042629 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-23 00:02:29.586227 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-23 00:02:29.844679 | orchestrator |
2026-03-23 00:02:29.844750 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-23 00:02:29.844757 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-23 00:02:29.844762 | orchestrator | + create
2026-03-23 00:02:29.844767 | orchestrator | <= read (data resources)
2026-03-23 00:02:29.844781 | orchestrator |
2026-03-23 00:02:29.844785 | orchestrator | OpenTofu will perform the following actions:
2026-03-23 00:02:29.844790 | orchestrator |
2026-03-23 00:02:29.844810 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-23 00:02:29.844818 | orchestrator | # (config refers to values not yet known)
2026-03-23 00:02:29.844822 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-23 00:02:29.844826 | orchestrator | + checksum = (known after apply)
2026-03-23 00:02:29.844831 | orchestrator | + created_at = (known after apply)
2026-03-23 00:02:29.844835 | orchestrator | + file = (known after apply)
2026-03-23 00:02:29.844839 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.844860 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.844864 | orchestrator | + min_disk_gb = (known after apply)
2026-03-23 00:02:29.844868 | orchestrator | + min_ram_mb = (known after apply)
2026-03-23 00:02:29.844872 | orchestrator | + most_recent = true
2026-03-23 00:02:29.844876 | orchestrator | + name = (known after apply)
2026-03-23 00:02:29.844880 | orchestrator | + protected = (known after apply)
2026-03-23 00:02:29.844884 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.844890 | orchestrator | + schema = (known after apply)
2026-03-23 00:02:29.844894 | orchestrator | + size_bytes = (known after apply)
2026-03-23 00:02:29.844898 | orchestrator | + tags = (known after apply)
2026-03-23 00:02:29.844902 | orchestrator | + updated_at = (known after apply)
2026-03-23 00:02:29.844906 | orchestrator | }
2026-03-23 00:02:29.844917 | orchestrator |
2026-03-23 00:02:29.844921 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-23 00:02:29.844925 | orchestrator | # (config refers to values not yet known)
2026-03-23 00:02:29.844929 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-23 00:02:29.844933 | orchestrator | + checksum = (known after apply)
2026-03-23 00:02:29.844937 | orchestrator | + created_at = (known after apply)
2026-03-23 00:02:29.844941 | orchestrator | + file = (known after apply)
2026-03-23 00:02:29.844944 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.844948 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.844952 | orchestrator | + min_disk_gb = (known after apply)
2026-03-23 00:02:29.844956 | orchestrator | + min_ram_mb = (known after apply)
2026-03-23 00:02:29.844959 | orchestrator | + most_recent = true
2026-03-23 00:02:29.844963 | orchestrator | + name = (known after apply)
2026-03-23 00:02:29.844967 | orchestrator | + protected = (known after apply)
2026-03-23 00:02:29.844970 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.844974 | orchestrator | + schema = (known after apply)
2026-03-23 00:02:29.844978 | orchestrator | + size_bytes = (known after apply)
2026-03-23 00:02:29.844981 | orchestrator | + tags = (known after apply)
2026-03-23 00:02:29.844985 | orchestrator | + updated_at = (known after apply)
2026-03-23 00:02:29.844989 | orchestrator | }
2026-03-23 00:02:29.844993 | orchestrator |
2026-03-23 00:02:29.844996 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-23 00:02:29.845000 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-23 00:02:29.845004 | orchestrator | + content = (known after apply)
2026-03-23 00:02:29.845008 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-23 00:02:29.845012 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-23 00:02:29.845016 | orchestrator | + content_md5 = (known after apply)
2026-03-23 00:02:29.845019 | orchestrator | + content_sha1 = (known after apply)
2026-03-23 00:02:29.845023 | orchestrator | + content_sha256 = (known after apply)
2026-03-23 00:02:29.845027 | orchestrator | + content_sha512 = (known after apply)
2026-03-23 00:02:29.845030 | orchestrator | + directory_permission = "0777"
2026-03-23 00:02:29.845034 | orchestrator | + file_permission = "0644"
2026-03-23 00:02:29.845038 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-23 00:02:29.845042 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.845045 | orchestrator | }
2026-03-23 00:02:29.845049 | orchestrator |
2026-03-23 00:02:29.845053 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-23 00:02:29.845057 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-23 00:02:29.845061 | orchestrator | + content = (known after apply)
2026-03-23 00:02:29.845064 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-23 00:02:29.845068 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-23 00:02:29.845072 | orchestrator | + content_md5 = (known after apply)
2026-03-23 00:02:29.845075 | orchestrator | + content_sha1 = (known after apply)
2026-03-23 00:02:29.845079 | orchestrator | + content_sha256 = (known after apply)
2026-03-23 00:02:29.845087 | orchestrator | + content_sha512 = (known after apply)
2026-03-23 00:02:29.845091 | orchestrator | + directory_permission = "0777"
2026-03-23 00:02:29.845095 | orchestrator | + file_permission = "0644"
2026-03-23 00:02:29.845103 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-23 00:02:29.845107 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.845111 | orchestrator | }
2026-03-23 00:02:29.845114 | orchestrator |
2026-03-23 00:02:29.845118 | orchestrator | # local_file.inventory will be created
2026-03-23 00:02:29.845122 | orchestrator | + resource "local_file" "inventory" {
2026-03-23 00:02:29.845126 | orchestrator | + content = (known after apply)
2026-03-23 00:02:29.845129 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-23 00:02:29.845133 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-23 00:02:29.845137 | orchestrator | + content_md5 = (known after apply)
2026-03-23 00:02:29.845141 | orchestrator | + content_sha1 = (known after apply)
2026-03-23 00:02:29.845145 | orchestrator | + content_sha256 = (known after apply)
2026-03-23 00:02:29.845148 | orchestrator | + content_sha512 = (known after apply)
2026-03-23 00:02:29.845152 | orchestrator | + directory_permission = "0777"
2026-03-23 00:02:29.845156 | orchestrator | + file_permission = "0644"
2026-03-23 00:02:29.845160 | orchestrator | + filename = "inventory.ci"
2026-03-23 00:02:29.845164 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.845167 | orchestrator | }
2026-03-23 00:02:29.845171 | orchestrator |
2026-03-23 00:02:29.845175 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-23 00:02:29.845179 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-23 00:02:29.845182 | orchestrator | + content = (sensitive value)
2026-03-23 00:02:29.845186 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-23 00:02:29.845190 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-23 00:02:29.845194 | orchestrator | + content_md5 = (known after apply)
2026-03-23 00:02:29.845197 | orchestrator | + content_sha1 = (known after apply)
2026-03-23 00:02:29.845201 | orchestrator | + content_sha256 = (known after apply)
2026-03-23 00:02:29.845205 | orchestrator | + content_sha512 = (known after apply)
2026-03-23 00:02:29.845209 | orchestrator | + directory_permission = "0700"
2026-03-23 00:02:29.845212 | orchestrator | + file_permission = "0600"
2026-03-23 00:02:29.845216 | orchestrator | + filename = ".id_rsa.ci"
2026-03-23 00:02:29.845220 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.845224 | orchestrator | }
2026-03-23 00:02:29.845228 | orchestrator |
2026-03-23 00:02:29.845232 | orchestrator | # null_resource.node_semaphore will be created
2026-03-23 00:02:29.845235 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-23 00:02:29.845239 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.845243 | orchestrator | }
2026-03-23 00:02:29.845249 | orchestrator |
2026-03-23 00:02:29.845253 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-23 00:02:29.845256 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-23 00:02:29.845260 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.845264 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.845268 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.845276 | orchestrator | + image_id = (known after apply)
2026-03-23 00:02:29.845280 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.845283 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-23 00:02:29.845287 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.845291 | orchestrator | + size = 80
2026-03-23 00:02:29.845295 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.845298 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.845302 | orchestrator | }
2026-03-23 00:02:29.845306 | orchestrator |
2026-03-23 00:02:29.845310 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-23 00:02:29.845314 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-23 00:02:29.845317 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.845321 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.845325 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.845332 | orchestrator | + image_id = (known after apply)
2026-03-23 00:02:29.845336 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.845340 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-23 00:02:29.845343 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.845347 | orchestrator | + size = 80
2026-03-23 00:02:29.845351 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.845355 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.845358 | orchestrator | }
2026-03-23 00:02:29.845362 | orchestrator |
2026-03-23 00:02:29.845366 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-23 00:02:29.845370 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-23 00:02:29.845373 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.845377 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.845381 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.845385 | orchestrator | + image_id = (known after apply)
2026-03-23 00:02:29.845388 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.845392 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-23 00:02:29.845396 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.845400 | orchestrator | + size = 80
2026-03-23 00:02:29.845403 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.845407 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.845411 | orchestrator | }
2026-03-23 00:02:29.845414 | orchestrator |
2026-03-23 00:02:29.845418 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-23 00:02:29.845422 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-23 00:02:29.845426 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.845430 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.845433 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.845437 | orchestrator | + image_id = (known after apply)
2026-03-23 00:02:29.845441 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.845445 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-23 00:02:29.845448 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.845452 | orchestrator | + size = 80
2026-03-23 00:02:29.845458 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.845462 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.845466 | orchestrator | }
2026-03-23 00:02:29.850394 | orchestrator |
2026-03-23 00:02:29.850443 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-23 00:02:29.850449 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-23 00:02:29.850454 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850458 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850462 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850467 | orchestrator | + image_id = (known after apply)
2026-03-23 00:02:29.850471 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.850475 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-23 00:02:29.850479 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.850482 | orchestrator | + size = 80
2026-03-23 00:02:29.850486 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.850490 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.850494 | orchestrator | }
2026-03-23 00:02:29.850504 | orchestrator |
2026-03-23 00:02:29.850508 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-23 00:02:29.850512 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-23 00:02:29.850516 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850520 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850524 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850539 | orchestrator | + image_id = (known after apply)
2026-03-23 00:02:29.850543 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.850547 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-23 00:02:29.850552 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.850555 | orchestrator | + size = 80
2026-03-23 00:02:29.850559 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.850563 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.850567 | orchestrator | }
2026-03-23 00:02:29.850570 | orchestrator |
2026-03-23 00:02:29.850574 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-23 00:02:29.850578 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-23 00:02:29.850582 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850585 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850589 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850593 | orchestrator | + image_id = (known after apply)
2026-03-23 00:02:29.850597 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.850600 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-23 00:02:29.850604 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.850608 | orchestrator | + size = 80
2026-03-23 00:02:29.850612 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.850615 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.850619 | orchestrator | }
2026-03-23 00:02:29.850625 | orchestrator |
2026-03-23 00:02:29.850629 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-23 00:02:29.850635 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-23 00:02:29.850639 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850643 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850647 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850650 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.850655 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-23 00:02:29.850658 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.850662 | orchestrator | + size = 20
2026-03-23 00:02:29.850666 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.850670 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.850674 | orchestrator | }
2026-03-23 00:02:29.850678 | orchestrator |
2026-03-23 00:02:29.850681 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-23 00:02:29.850685 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-23 00:02:29.850689 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850693 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850696 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850700 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.850704 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-23 00:02:29.850708 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.850711 | orchestrator | + size = 20
2026-03-23 00:02:29.850715 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.850719 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.850723 | orchestrator | }
2026-03-23 00:02:29.850726 | orchestrator |
2026-03-23 00:02:29.850732 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-23 00:02:29.850736 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-23 00:02:29.850740 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850744 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850748 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850751 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.850755 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-23 00:02:29.850759 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.850768 | orchestrator | + size = 20
2026-03-23 00:02:29.850772 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.850775 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.850779 | orchestrator | }
2026-03-23 00:02:29.850783 | orchestrator |
2026-03-23 00:02:29.850787 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-23 00:02:29.850790 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-23 00:02:29.850816 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850822 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850828 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850840 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.850846 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-23 00:02:29.850852 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.850858 | orchestrator | + size = 20
2026-03-23 00:02:29.850864 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.850869 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.850873 | orchestrator | }
2026-03-23 00:02:29.850877 | orchestrator |
2026-03-23 00:02:29.850880 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-23 00:02:29.850884 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-23 00:02:29.850888 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850891 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850895 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850899 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.850903 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-23 00:02:29.850906 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.850910 | orchestrator | + size = 20
2026-03-23 00:02:29.850914 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.850918 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.850921 | orchestrator | }
2026-03-23 00:02:29.850927 | orchestrator |
2026-03-23 00:02:29.850931 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-23 00:02:29.850935 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-23 00:02:29.850938 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850942 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850946 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850949 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.850953 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-23 00:02:29.850957 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.850960 | orchestrator | + size = 20
2026-03-23 00:02:29.850964 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.850968 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.850971 | orchestrator | }
2026-03-23 00:02:29.850975 | orchestrator |
2026-03-23 00:02:29.850979 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-23 00:02:29.850982 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-23 00:02:29.850986 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.850990 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.850994 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.850997 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.851001 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-23 00:02:29.851005 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.851008 | orchestrator | + size = 20
2026-03-23 00:02:29.851012 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.851016 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.851019 | orchestrator | }
2026-03-23 00:02:29.851023 | orchestrator |
2026-03-23 00:02:29.851027 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-23 00:02:29.851031 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-23 00:02:29.851038 | orchestrator | + attachment = (known after apply)
2026-03-23 00:02:29.851042 | orchestrator | + availability_zone = "nova"
2026-03-23 00:02:29.851046 | orchestrator | + id = (known after apply)
2026-03-23 00:02:29.851049 | orchestrator | + metadata = (known after apply)
2026-03-23 00:02:29.851053 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-23 00:02:29.851057 | orchestrator | + region = (known after apply)
2026-03-23 00:02:29.851061 | orchestrator | + size = 20
2026-03-23 00:02:29.851064 | orchestrator | + volume_retype_policy = "never"
2026-03-23 00:02:29.851068 | orchestrator | + volume_type = "ssd"
2026-03-23 00:02:29.851072 | orchestrator | }
2026-03-23 00:02:29.851077 | orchestrator |
2026-03-23 00:02:29.851081 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-23 00:02:29.851085 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-23 00:02:29.851088 | orchestrator | + attachment = (known after apply) 2026-03-23 00:02:29.851092 | orchestrator | + availability_zone = "nova" 2026-03-23 00:02:29.851096 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.851100 | orchestrator | + metadata = (known after apply) 2026-03-23 00:02:29.851104 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-23 00:02:29.851107 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.851111 | orchestrator | + size = 20 2026-03-23 00:02:29.851115 | orchestrator | + volume_retype_policy = "never" 2026-03-23 00:02:29.851119 | orchestrator | + volume_type = "ssd" 2026-03-23 00:02:29.851123 | orchestrator | } 2026-03-23 00:02:29.851128 | orchestrator | 2026-03-23 00:02:29.851132 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-23 00:02:29.851136 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-23 00:02:29.851139 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-23 00:02:29.851143 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-23 00:02:29.851147 | orchestrator | + all_metadata = (known after apply) 2026-03-23 00:02:29.851151 | orchestrator | + all_tags = (known after apply) 2026-03-23 00:02:29.851154 | orchestrator | + availability_zone = "nova" 2026-03-23 00:02:29.851158 | orchestrator | + config_drive = true 2026-03-23 00:02:29.851165 | orchestrator | + created = (known after apply) 2026-03-23 00:02:29.851169 | orchestrator | + flavor_id = (known after apply) 2026-03-23 00:02:29.851172 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-23 00:02:29.851176 | orchestrator | + force_delete = false 2026-03-23 00:02:29.851180 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-23 00:02:29.851184 | 
orchestrator | + id = (known after apply) 2026-03-23 00:02:29.851187 | orchestrator | + image_id = (known after apply) 2026-03-23 00:02:29.851191 | orchestrator | + image_name = (known after apply) 2026-03-23 00:02:29.851195 | orchestrator | + key_pair = "testbed" 2026-03-23 00:02:29.851199 | orchestrator | + name = "testbed-manager" 2026-03-23 00:02:29.851203 | orchestrator | + power_state = "active" 2026-03-23 00:02:29.851206 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.851210 | orchestrator | + security_groups = (known after apply) 2026-03-23 00:02:29.851214 | orchestrator | + stop_before_destroy = false 2026-03-23 00:02:29.851218 | orchestrator | + updated = (known after apply) 2026-03-23 00:02:29.851222 | orchestrator | + user_data = (sensitive value) 2026-03-23 00:02:29.851225 | orchestrator | 2026-03-23 00:02:29.851229 | orchestrator | + block_device { 2026-03-23 00:02:29.851233 | orchestrator | + boot_index = 0 2026-03-23 00:02:29.851237 | orchestrator | + delete_on_termination = false 2026-03-23 00:02:29.851241 | orchestrator | + destination_type = "volume" 2026-03-23 00:02:29.851245 | orchestrator | + multiattach = false 2026-03-23 00:02:29.851248 | orchestrator | + source_type = "volume" 2026-03-23 00:02:29.851252 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.851260 | orchestrator | } 2026-03-23 00:02:29.851264 | orchestrator | 2026-03-23 00:02:29.851267 | orchestrator | + network { 2026-03-23 00:02:29.851271 | orchestrator | + access_network = false 2026-03-23 00:02:29.851275 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-23 00:02:29.851279 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-23 00:02:29.851283 | orchestrator | + mac = (known after apply) 2026-03-23 00:02:29.851286 | orchestrator | + name = (known after apply) 2026-03-23 00:02:29.851290 | orchestrator | + port = (known after apply) 2026-03-23 00:02:29.851294 | orchestrator | + uuid = (known after apply) 2026-03-23 
00:02:29.851298 | orchestrator | } 2026-03-23 00:02:29.851302 | orchestrator | } 2026-03-23 00:02:29.851307 | orchestrator | 2026-03-23 00:02:29.851311 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-23 00:02:29.851315 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-23 00:02:29.851319 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-23 00:02:29.851322 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-23 00:02:29.851326 | orchestrator | + all_metadata = (known after apply) 2026-03-23 00:02:29.851330 | orchestrator | + all_tags = (known after apply) 2026-03-23 00:02:29.851334 | orchestrator | + availability_zone = "nova" 2026-03-23 00:02:29.851337 | orchestrator | + config_drive = true 2026-03-23 00:02:29.851341 | orchestrator | + created = (known after apply) 2026-03-23 00:02:29.851345 | orchestrator | + flavor_id = (known after apply) 2026-03-23 00:02:29.851349 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-23 00:02:29.851352 | orchestrator | + force_delete = false 2026-03-23 00:02:29.851356 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-23 00:02:29.851360 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.851364 | orchestrator | + image_id = (known after apply) 2026-03-23 00:02:29.851368 | orchestrator | + image_name = (known after apply) 2026-03-23 00:02:29.851371 | orchestrator | + key_pair = "testbed" 2026-03-23 00:02:29.851375 | orchestrator | + name = "testbed-node-0" 2026-03-23 00:02:29.851379 | orchestrator | + power_state = "active" 2026-03-23 00:02:29.851383 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.851387 | orchestrator | + security_groups = (known after apply) 2026-03-23 00:02:29.851390 | orchestrator | + stop_before_destroy = false 2026-03-23 00:02:29.851394 | orchestrator | + updated = (known after apply) 2026-03-23 00:02:29.851398 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-23 00:02:29.851402 | orchestrator | 2026-03-23 00:02:29.851406 | orchestrator | + block_device { 2026-03-23 00:02:29.851409 | orchestrator | + boot_index = 0 2026-03-23 00:02:29.851413 | orchestrator | + delete_on_termination = false 2026-03-23 00:02:29.851417 | orchestrator | + destination_type = "volume" 2026-03-23 00:02:29.851421 | orchestrator | + multiattach = false 2026-03-23 00:02:29.851424 | orchestrator | + source_type = "volume" 2026-03-23 00:02:29.851428 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.851432 | orchestrator | } 2026-03-23 00:02:29.851436 | orchestrator | 2026-03-23 00:02:29.851440 | orchestrator | + network { 2026-03-23 00:02:29.851444 | orchestrator | + access_network = false 2026-03-23 00:02:29.851447 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-23 00:02:29.851451 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-23 00:02:29.851455 | orchestrator | + mac = (known after apply) 2026-03-23 00:02:29.851459 | orchestrator | + name = (known after apply) 2026-03-23 00:02:29.851463 | orchestrator | + port = (known after apply) 2026-03-23 00:02:29.851466 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.851470 | orchestrator | } 2026-03-23 00:02:29.851474 | orchestrator | } 2026-03-23 00:02:29.851479 | orchestrator | 2026-03-23 00:02:29.851483 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-23 00:02:29.851487 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-23 00:02:29.851491 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-23 00:02:29.851498 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-23 00:02:29.851502 | orchestrator | + all_metadata = (known after apply) 2026-03-23 00:02:29.851506 | orchestrator | + all_tags = (known after apply) 2026-03-23 00:02:29.851509 | orchestrator | + availability_zone = "nova" 2026-03-23 00:02:29.851513 
| orchestrator | + config_drive = true 2026-03-23 00:02:29.851517 | orchestrator | + created = (known after apply) 2026-03-23 00:02:29.851521 | orchestrator | + flavor_id = (known after apply) 2026-03-23 00:02:29.851524 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-23 00:02:29.851528 | orchestrator | + force_delete = false 2026-03-23 00:02:29.851532 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-23 00:02:29.851536 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.851540 | orchestrator | + image_id = (known after apply) 2026-03-23 00:02:29.851543 | orchestrator | + image_name = (known after apply) 2026-03-23 00:02:29.851547 | orchestrator | + key_pair = "testbed" 2026-03-23 00:02:29.851551 | orchestrator | + name = "testbed-node-1" 2026-03-23 00:02:29.851555 | orchestrator | + power_state = "active" 2026-03-23 00:02:29.851558 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.851562 | orchestrator | + security_groups = (known after apply) 2026-03-23 00:02:29.851566 | orchestrator | + stop_before_destroy = false 2026-03-23 00:02:29.851570 | orchestrator | + updated = (known after apply) 2026-03-23 00:02:29.851576 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-23 00:02:29.851580 | orchestrator | 2026-03-23 00:02:29.851584 | orchestrator | + block_device { 2026-03-23 00:02:29.851587 | orchestrator | + boot_index = 0 2026-03-23 00:02:29.851591 | orchestrator | + delete_on_termination = false 2026-03-23 00:02:29.851595 | orchestrator | + destination_type = "volume" 2026-03-23 00:02:29.851599 | orchestrator | + multiattach = false 2026-03-23 00:02:29.851603 | orchestrator | + source_type = "volume" 2026-03-23 00:02:29.851606 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.851610 | orchestrator | } 2026-03-23 00:02:29.851614 | orchestrator | 2026-03-23 00:02:29.851618 | orchestrator | + network { 2026-03-23 00:02:29.851622 | orchestrator | + access_network = 
false 2026-03-23 00:02:29.851625 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-23 00:02:29.851629 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-23 00:02:29.851633 | orchestrator | + mac = (known after apply) 2026-03-23 00:02:29.851637 | orchestrator | + name = (known after apply) 2026-03-23 00:02:29.851641 | orchestrator | + port = (known after apply) 2026-03-23 00:02:29.851644 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.851648 | orchestrator | } 2026-03-23 00:02:29.851652 | orchestrator | } 2026-03-23 00:02:29.851657 | orchestrator | 2026-03-23 00:02:29.851661 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-23 00:02:29.851665 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-23 00:02:29.851669 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-23 00:02:29.851673 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-23 00:02:29.851677 | orchestrator | + all_metadata = (known after apply) 2026-03-23 00:02:29.851680 | orchestrator | + all_tags = (known after apply) 2026-03-23 00:02:29.851684 | orchestrator | + availability_zone = "nova" 2026-03-23 00:02:29.851688 | orchestrator | + config_drive = true 2026-03-23 00:02:29.851692 | orchestrator | + created = (known after apply) 2026-03-23 00:02:29.851696 | orchestrator | + flavor_id = (known after apply) 2026-03-23 00:02:29.851699 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-23 00:02:29.851703 | orchestrator | + force_delete = false 2026-03-23 00:02:29.851707 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-23 00:02:29.851711 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.851714 | orchestrator | + image_id = (known after apply) 2026-03-23 00:02:29.851721 | orchestrator | + image_name = (known after apply) 2026-03-23 00:02:29.851725 | orchestrator | + key_pair = "testbed" 2026-03-23 00:02:29.851729 | orchestrator | + name = 
"testbed-node-2" 2026-03-23 00:02:29.851733 | orchestrator | + power_state = "active" 2026-03-23 00:02:29.851736 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.851740 | orchestrator | + security_groups = (known after apply) 2026-03-23 00:02:29.851744 | orchestrator | + stop_before_destroy = false 2026-03-23 00:02:29.851748 | orchestrator | + updated = (known after apply) 2026-03-23 00:02:29.851752 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-23 00:02:29.851756 | orchestrator | 2026-03-23 00:02:29.851759 | orchestrator | + block_device { 2026-03-23 00:02:29.851763 | orchestrator | + boot_index = 0 2026-03-23 00:02:29.851767 | orchestrator | + delete_on_termination = false 2026-03-23 00:02:29.851771 | orchestrator | + destination_type = "volume" 2026-03-23 00:02:29.851774 | orchestrator | + multiattach = false 2026-03-23 00:02:29.851778 | orchestrator | + source_type = "volume" 2026-03-23 00:02:29.851782 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.851786 | orchestrator | } 2026-03-23 00:02:29.851790 | orchestrator | 2026-03-23 00:02:29.851822 | orchestrator | + network { 2026-03-23 00:02:29.851827 | orchestrator | + access_network = false 2026-03-23 00:02:29.851831 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-23 00:02:29.851834 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-23 00:02:29.851838 | orchestrator | + mac = (known after apply) 2026-03-23 00:02:29.851842 | orchestrator | + name = (known after apply) 2026-03-23 00:02:29.851846 | orchestrator | + port = (known after apply) 2026-03-23 00:02:29.851849 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.851853 | orchestrator | } 2026-03-23 00:02:29.851857 | orchestrator | } 2026-03-23 00:02:29.851863 | orchestrator | 2026-03-23 00:02:29.851872 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-23 00:02:29.851876 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-23 00:02:29.851880 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-23 00:02:29.851884 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-23 00:02:29.851888 | orchestrator | + all_metadata = (known after apply) 2026-03-23 00:02:29.851891 | orchestrator | + all_tags = (known after apply) 2026-03-23 00:02:29.851895 | orchestrator | + availability_zone = "nova" 2026-03-23 00:02:29.851899 | orchestrator | + config_drive = true 2026-03-23 00:02:29.851903 | orchestrator | + created = (known after apply) 2026-03-23 00:02:29.851906 | orchestrator | + flavor_id = (known after apply) 2026-03-23 00:02:29.851910 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-23 00:02:29.851914 | orchestrator | + force_delete = false 2026-03-23 00:02:29.851918 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-23 00:02:29.851922 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.851925 | orchestrator | + image_id = (known after apply) 2026-03-23 00:02:29.851929 | orchestrator | + image_name = (known after apply) 2026-03-23 00:02:29.851933 | orchestrator | + key_pair = "testbed" 2026-03-23 00:02:29.851937 | orchestrator | + name = "testbed-node-3" 2026-03-23 00:02:29.851940 | orchestrator | + power_state = "active" 2026-03-23 00:02:29.851944 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.851948 | orchestrator | + security_groups = (known after apply) 2026-03-23 00:02:29.851952 | orchestrator | + stop_before_destroy = false 2026-03-23 00:02:29.851956 | orchestrator | + updated = (known after apply) 2026-03-23 00:02:29.851959 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-23 00:02:29.851963 | orchestrator | 2026-03-23 00:02:29.851967 | orchestrator | + block_device { 2026-03-23 00:02:29.851971 | orchestrator | + boot_index = 0 2026-03-23 00:02:29.851975 | orchestrator | + delete_on_termination = false 2026-03-23 
00:02:29.851978 | orchestrator | + destination_type = "volume" 2026-03-23 00:02:29.851991 | orchestrator | + multiattach = false 2026-03-23 00:02:29.851994 | orchestrator | + source_type = "volume" 2026-03-23 00:02:29.851998 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.852002 | orchestrator | } 2026-03-23 00:02:29.852006 | orchestrator | 2026-03-23 00:02:29.852010 | orchestrator | + network { 2026-03-23 00:02:29.852013 | orchestrator | + access_network = false 2026-03-23 00:02:29.852017 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-23 00:02:29.852021 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-23 00:02:29.852025 | orchestrator | + mac = (known after apply) 2026-03-23 00:02:29.852028 | orchestrator | + name = (known after apply) 2026-03-23 00:02:29.852032 | orchestrator | + port = (known after apply) 2026-03-23 00:02:29.852036 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.852040 | orchestrator | } 2026-03-23 00:02:29.852044 | orchestrator | } 2026-03-23 00:02:29.852049 | orchestrator | 2026-03-23 00:02:29.852053 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-23 00:02:29.852057 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-23 00:02:29.852061 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-23 00:02:29.852064 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-23 00:02:29.852068 | orchestrator | + all_metadata = (known after apply) 2026-03-23 00:02:29.852072 | orchestrator | + all_tags = (known after apply) 2026-03-23 00:02:29.852076 | orchestrator | + availability_zone = "nova" 2026-03-23 00:02:29.852080 | orchestrator | + config_drive = true 2026-03-23 00:02:29.852083 | orchestrator | + created = (known after apply) 2026-03-23 00:02:29.852087 | orchestrator | + flavor_id = (known after apply) 2026-03-23 00:02:29.852091 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-23 00:02:29.852095 | 
orchestrator | + force_delete = false 2026-03-23 00:02:29.852098 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-23 00:02:29.852102 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.852106 | orchestrator | + image_id = (known after apply) 2026-03-23 00:02:29.852110 | orchestrator | + image_name = (known after apply) 2026-03-23 00:02:29.852113 | orchestrator | + key_pair = "testbed" 2026-03-23 00:02:29.852117 | orchestrator | + name = "testbed-node-4" 2026-03-23 00:02:29.852121 | orchestrator | + power_state = "active" 2026-03-23 00:02:29.852125 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.852129 | orchestrator | + security_groups = (known after apply) 2026-03-23 00:02:29.852132 | orchestrator | + stop_before_destroy = false 2026-03-23 00:02:29.852136 | orchestrator | + updated = (known after apply) 2026-03-23 00:02:29.852140 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-23 00:02:29.852144 | orchestrator | 2026-03-23 00:02:29.852148 | orchestrator | + block_device { 2026-03-23 00:02:29.852151 | orchestrator | + boot_index = 0 2026-03-23 00:02:29.852155 | orchestrator | + delete_on_termination = false 2026-03-23 00:02:29.852159 | orchestrator | + destination_type = "volume" 2026-03-23 00:02:29.852163 | orchestrator | + multiattach = false 2026-03-23 00:02:29.852166 | orchestrator | + source_type = "volume" 2026-03-23 00:02:29.852170 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.852174 | orchestrator | } 2026-03-23 00:02:29.852178 | orchestrator | 2026-03-23 00:02:29.852182 | orchestrator | + network { 2026-03-23 00:02:29.852185 | orchestrator | + access_network = false 2026-03-23 00:02:29.852189 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-23 00:02:29.852193 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-23 00:02:29.852197 | orchestrator | + mac = (known after apply) 2026-03-23 00:02:29.852201 | orchestrator | + name = (known 
after apply) 2026-03-23 00:02:29.852204 | orchestrator | + port = (known after apply) 2026-03-23 00:02:29.852208 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.852212 | orchestrator | } 2026-03-23 00:02:29.852216 | orchestrator | } 2026-03-23 00:02:29.852225 | orchestrator | 2026-03-23 00:02:29.852229 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-23 00:02:29.852232 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-23 00:02:29.852236 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-23 00:02:29.852240 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-23 00:02:29.852244 | orchestrator | + all_metadata = (known after apply) 2026-03-23 00:02:29.852247 | orchestrator | + all_tags = (known after apply) 2026-03-23 00:02:29.852251 | orchestrator | + availability_zone = "nova" 2026-03-23 00:02:29.852255 | orchestrator | + config_drive = true 2026-03-23 00:02:29.852259 | orchestrator | + created = (known after apply) 2026-03-23 00:02:29.852262 | orchestrator | + flavor_id = (known after apply) 2026-03-23 00:02:29.852266 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-23 00:02:29.852270 | orchestrator | + force_delete = false 2026-03-23 00:02:29.852274 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-23 00:02:29.852278 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.852281 | orchestrator | + image_id = (known after apply) 2026-03-23 00:02:29.852285 | orchestrator | + image_name = (known after apply) 2026-03-23 00:02:29.852289 | orchestrator | + key_pair = "testbed" 2026-03-23 00:02:29.852293 | orchestrator | + name = "testbed-node-5" 2026-03-23 00:02:29.852296 | orchestrator | + power_state = "active" 2026-03-23 00:02:29.852300 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.852304 | orchestrator | + security_groups = (known after apply) 2026-03-23 00:02:29.852308 | orchestrator | + 
stop_before_destroy = false 2026-03-23 00:02:29.852311 | orchestrator | + updated = (known after apply) 2026-03-23 00:02:29.852315 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-23 00:02:29.852319 | orchestrator | 2026-03-23 00:02:29.852337 | orchestrator | + block_device { 2026-03-23 00:02:29.852340 | orchestrator | + boot_index = 0 2026-03-23 00:02:29.852344 | orchestrator | + delete_on_termination = false 2026-03-23 00:02:29.852348 | orchestrator | + destination_type = "volume" 2026-03-23 00:02:29.852352 | orchestrator | + multiattach = false 2026-03-23 00:02:29.852355 | orchestrator | + source_type = "volume" 2026-03-23 00:02:29.852359 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.852363 | orchestrator | } 2026-03-23 00:02:29.852367 | orchestrator | 2026-03-23 00:02:29.852370 | orchestrator | + network { 2026-03-23 00:02:29.852374 | orchestrator | + access_network = false 2026-03-23 00:02:29.852378 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-23 00:02:29.852382 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-23 00:02:29.852385 | orchestrator | + mac = (known after apply) 2026-03-23 00:02:29.852389 | orchestrator | + name = (known after apply) 2026-03-23 00:02:29.852393 | orchestrator | + port = (known after apply) 2026-03-23 00:02:29.852397 | orchestrator | + uuid = (known after apply) 2026-03-23 00:02:29.852400 | orchestrator | } 2026-03-23 00:02:29.852404 | orchestrator | } 2026-03-23 00:02:29.852408 | orchestrator | 2026-03-23 00:02:29.852412 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-23 00:02:29.852416 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-23 00:02:29.852419 | orchestrator | + fingerprint = (known after apply) 2026-03-23 00:02:29.852423 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.852427 | orchestrator | + name = "testbed" 2026-03-23 00:02:29.852431 | orchestrator | + private_key = 
(sensitive value) 2026-03-23 00:02:29.852435 | orchestrator | + public_key = (known after apply) 2026-03-23 00:02:29.852438 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.852442 | orchestrator | + user_id = (known after apply) 2026-03-23 00:02:29.852446 | orchestrator | } 2026-03-23 00:02:29.852449 | orchestrator | 2026-03-23 00:02:29.852453 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-23 00:02:29.852457 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-23 00:02:29.852464 | orchestrator | + device = (known after apply) 2026-03-23 00:02:29.852468 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.852472 | orchestrator | + instance_id = (known after apply) 2026-03-23 00:02:29.852476 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.852482 | orchestrator | + volume_id = (known after apply) 2026-03-23 00:02:29.852486 | orchestrator | } 2026-03-23 00:02:29.852489 | orchestrator | 2026-03-23 00:02:29.852493 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-23 00:02:29.852497 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-23 00:02:29.852501 | orchestrator | + device = (known after apply) 2026-03-23 00:02:29.852505 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.852508 | orchestrator | + instance_id = (known after apply) 2026-03-23 00:02:29.852512 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.852516 | orchestrator | + volume_id = (known after apply) 2026-03-23 00:02:29.852519 | orchestrator | } 2026-03-23 00:02:29.852523 | orchestrator | 2026-03-23 00:02:29.852527 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-23 00:02:29.852531 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-03-23 00:02:29.852534 | orchestrator |       + device      = (known after apply)
2026-03-23 00:02:29.852538 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.852542 | orchestrator |       + instance_id = (known after apply)
2026-03-23 00:02:29.852546 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.852549 | orchestrator |       + volume_id   = (known after apply)
2026-03-23 00:02:29.852553 | orchestrator |     }
2026-03-23 00:02:29.852559 | orchestrator |
2026-03-23 00:02:29.852563 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-03-23 00:02:29.852567 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-23 00:02:29.852571 | orchestrator |       + device      = (known after apply)
2026-03-23 00:02:29.852575 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.852578 | orchestrator |       + instance_id = (known after apply)
2026-03-23 00:02:29.852582 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.852586 | orchestrator |       + volume_id   = (known after apply)
2026-03-23 00:02:29.852589 | orchestrator |     }
2026-03-23 00:02:29.852593 | orchestrator |
2026-03-23 00:02:29.852597 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-03-23 00:02:29.852601 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-23 00:02:29.852605 | orchestrator |       + device      = (known after apply)
2026-03-23 00:02:29.852608 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.852612 | orchestrator |       + instance_id = (known after apply)
2026-03-23 00:02:29.852616 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.852619 | orchestrator |       + volume_id   = (known after apply)
2026-03-23 00:02:29.852623 | orchestrator |     }
2026-03-23 00:02:29.852627 | orchestrator |
2026-03-23 00:02:29.852631 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-03-23 00:02:29.852634 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-23 00:02:29.852638 | orchestrator |       + device      = (known after apply)
2026-03-23 00:02:29.852642 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.852645 | orchestrator |       + instance_id = (known after apply)
2026-03-23 00:02:29.852649 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.852653 | orchestrator |       + volume_id   = (known after apply)
2026-03-23 00:02:29.852657 | orchestrator |     }
2026-03-23 00:02:29.852660 | orchestrator |
2026-03-23 00:02:29.852664 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-03-23 00:02:29.852668 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-23 00:02:29.852672 | orchestrator |       + device      = (known after apply)
2026-03-23 00:02:29.852675 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.852679 | orchestrator |       + instance_id = (known after apply)
2026-03-23 00:02:29.852683 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.852690 | orchestrator |       + volume_id   = (known after apply)
2026-03-23 00:02:29.852694 | orchestrator |     }
2026-03-23 00:02:29.852698 | orchestrator |
2026-03-23 00:02:29.852701 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-03-23 00:02:29.852705 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-23 00:02:29.852709 | orchestrator |       + device      = (known after apply)
2026-03-23 00:02:29.852713 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.852716 | orchestrator |       + instance_id = (known after apply)
2026-03-23 00:02:29.852720 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.852724 | orchestrator |       + volume_id   = (known after apply)
2026-03-23 00:02:29.852728 | orchestrator |     }
2026-03-23 00:02:29.852731 | orchestrator |
2026-03-23 00:02:29.852735 | orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-03-23 00:02:29.852739 | orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-23 00:02:29.852743 | orchestrator |       + device      = (known after apply)
2026-03-23 00:02:29.852746 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.852750 | orchestrator |       + instance_id = (known after apply)
2026-03-23 00:02:29.852754 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.852758 | orchestrator |       + volume_id   = (known after apply)
2026-03-23 00:02:29.852761 | orchestrator |     }
2026-03-23 00:02:29.852765 | orchestrator |
2026-03-23 00:02:29.852769 | orchestrator |   # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-03-23 00:02:29.852773 | orchestrator |   + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-03-23 00:02:29.852777 | orchestrator |       + fixed_ip    = (known after apply)
2026-03-23 00:02:29.852781 | orchestrator |       + floating_ip = (known after apply)
2026-03-23 00:02:29.852785 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.852788 | orchestrator |       + port_id     = (known after apply)
2026-03-23 00:02:29.852792 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.852807 | orchestrator |     }
2026-03-23 00:02:29.852811 | orchestrator |
2026-03-23 00:02:29.852815 | orchestrator |   # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-03-23 00:02:29.852819 | orchestrator |   + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-03-23 00:02:29.852823 | orchestrator |       + address    = (known after apply)
2026-03-23 00:02:29.852827 | orchestrator |       + all_tags   = (known after apply)
2026-03-23 00:02:29.852833 | orchestrator |       + dns_domain = (known after apply)
2026-03-23 00:02:29.852837 | orchestrator |       + dns_name   = (known after apply)
2026-03-23 00:02:29.852840 | orchestrator |       + fixed_ip   = (known after apply)
2026-03-23 00:02:29.852844 | orchestrator |       + id         = (known after apply)
2026-03-23 00:02:29.852848 | orchestrator |       + pool       = "public"
2026-03-23 00:02:29.852852 | orchestrator |       + port_id    = (known after apply)
2026-03-23 00:02:29.852856 | orchestrator |       + region     = (known after apply)
2026-03-23 00:02:29.852859 | orchestrator |       + subnet_id  = (known after apply)
2026-03-23 00:02:29.852863 | orchestrator |       + tenant_id  = (known after apply)
2026-03-23 00:02:29.852867 | orchestrator |     }
2026-03-23 00:02:29.852871 | orchestrator |
2026-03-23 00:02:29.852875 | orchestrator |   # openstack_networking_network_v2.net_management will be created
2026-03-23 00:02:29.852879 | orchestrator |   + resource "openstack_networking_network_v2" "net_management" {
2026-03-23 00:02:29.852882 | orchestrator |       + admin_state_up          = (known after apply)
2026-03-23 00:02:29.852886 | orchestrator |       + all_tags                = (known after apply)
2026-03-23 00:02:29.852890 | orchestrator |       + availability_zone_hints = [
2026-03-23 00:02:29.852894 | orchestrator |           + "nova",
2026-03-23 00:02:29.852897 | orchestrator |         ]
2026-03-23 00:02:29.852901 | orchestrator |       + dns_domain              = (known after apply)
2026-03-23 00:02:29.852905 | orchestrator |       + external                = (known after apply)
2026-03-23 00:02:29.852909 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.852912 | orchestrator |       + mtu                     = (known after apply)
2026-03-23 00:02:29.852916 | orchestrator |       + name                    = "net-testbed-management"
2026-03-23 00:02:29.852920 | orchestrator |       + port_security_enabled   = (known after apply)
2026-03-23 00:02:29.852927 | orchestrator |       + qos_policy_id           = (known after apply)
2026-03-23 00:02:29.852931 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.852934 | orchestrator |       + shared                  = (known after apply)
2026-03-23 00:02:29.852942 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.852945 | orchestrator |       + transparent_vlan        = (known after apply)
2026-03-23 00:02:29.852949 | orchestrator |
2026-03-23 00:02:29.852953 | orchestrator |       + segments (known after apply)
2026-03-23 00:02:29.852957 | orchestrator |     }
2026-03-23 00:02:29.852960 | orchestrator |
2026-03-23 00:02:29.852964 | orchestrator |   # openstack_networking_port_v2.manager_port_management will be created
2026-03-23 00:02:29.852968 | orchestrator |   + resource "openstack_networking_port_v2" "manager_port_management" {
2026-03-23 00:02:29.852972 | orchestrator |       + admin_state_up         = (known after apply)
2026-03-23 00:02:29.852976 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-03-23 00:02:29.852979 | orchestrator |       + all_security_group_ids = (known after apply)
2026-03-23 00:02:29.852983 | orchestrator |       + all_tags               = (known after apply)
2026-03-23 00:02:29.852987 | orchestrator |       + device_id              = (known after apply)
2026-03-23 00:02:29.852990 | orchestrator |       + device_owner           = (known after apply)
2026-03-23 00:02:29.852994 | orchestrator |       + dns_assignment         = (known after apply)
2026-03-23 00:02:29.852998 | orchestrator |       + dns_name               = (known after apply)
2026-03-23 00:02:29.853002 | orchestrator |       + id                     = (known after apply)
2026-03-23 00:02:29.853005 | orchestrator |       + mac_address            = (known after apply)
2026-03-23 00:02:29.853009 | orchestrator |       + network_id             = (known after apply)
2026-03-23 00:02:29.853013 | orchestrator |       + port_security_enabled  = (known after apply)
2026-03-23 00:02:29.853016 | orchestrator |       + qos_policy_id          = (known after apply)
2026-03-23 00:02:29.853020 | orchestrator |       + region                 = (known after apply)
2026-03-23 00:02:29.853024 | orchestrator |       + security_group_ids     = (known after apply)
2026-03-23 00:02:29.853028 | orchestrator |       + tenant_id              = (known after apply)
2026-03-23 00:02:29.853031 | orchestrator |
2026-03-23 00:02:29.853035 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853039 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-03-23 00:02:29.853043 | orchestrator |         }
2026-03-23 00:02:29.853046 | orchestrator |
2026-03-23 00:02:29.853050 | orchestrator |       + binding (known after apply)
2026-03-23 00:02:29.853054 | orchestrator |
2026-03-23 00:02:29.853058 | orchestrator |       + fixed_ip {
2026-03-23 00:02:29.853061 | orchestrator |           + ip_address = "192.168.16.5"
2026-03-23 00:02:29.853065 | orchestrator |           + subnet_id  = (known after apply)
2026-03-23 00:02:29.853069 | orchestrator |         }
2026-03-23 00:02:29.853073 | orchestrator |     }
2026-03-23 00:02:29.853077 | orchestrator |
2026-03-23 00:02:29.853080 | orchestrator |   # openstack_networking_port_v2.node_port_management[0] will be created
2026-03-23 00:02:29.853084 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-23 00:02:29.853088 | orchestrator |       + admin_state_up         = (known after apply)
2026-03-23 00:02:29.853092 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-03-23 00:02:29.853095 | orchestrator |       + all_security_group_ids = (known after apply)
2026-03-23 00:02:29.853099 | orchestrator |       + all_tags               = (known after apply)
2026-03-23 00:02:29.853103 | orchestrator |       + device_id              = (known after apply)
2026-03-23 00:02:29.853106 | orchestrator |       + device_owner           = (known after apply)
2026-03-23 00:02:29.853110 | orchestrator |       + dns_assignment         = (known after apply)
2026-03-23 00:02:29.853114 | orchestrator |       + dns_name               = (known after apply)
2026-03-23 00:02:29.853118 | orchestrator |       + id                     = (known after apply)
2026-03-23 00:02:29.853121 | orchestrator |       + mac_address            = (known after apply)
2026-03-23 00:02:29.853125 | orchestrator |       + network_id             = (known after apply)
2026-03-23 00:02:29.853129 | orchestrator |       + port_security_enabled  = (known after apply)
2026-03-23 00:02:29.853132 | orchestrator |       + qos_policy_id          = (known after apply)
2026-03-23 00:02:29.853136 | orchestrator |       + region                 = (known after apply)
2026-03-23 00:02:29.853142 | orchestrator |       + security_group_ids     = (known after apply)
2026-03-23 00:02:29.853146 | orchestrator |       + tenant_id              = (known after apply)
2026-03-23 00:02:29.853150 | orchestrator |
2026-03-23 00:02:29.853153 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853157 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-03-23 00:02:29.853161 | orchestrator |         }
2026-03-23 00:02:29.853165 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853168 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-03-23 00:02:29.853172 | orchestrator |         }
2026-03-23 00:02:29.853176 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853179 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-03-23 00:02:29.853183 | orchestrator |         }
2026-03-23 00:02:29.853187 | orchestrator |
2026-03-23 00:02:29.853191 | orchestrator |       + binding (known after apply)
2026-03-23 00:02:29.853194 | orchestrator |
2026-03-23 00:02:29.853198 | orchestrator |       + fixed_ip {
2026-03-23 00:02:29.853202 | orchestrator |           + ip_address = "192.168.16.10"
2026-03-23 00:02:29.853206 | orchestrator |           + subnet_id  = (known after apply)
2026-03-23 00:02:29.853209 | orchestrator |         }
2026-03-23 00:02:29.853213 | orchestrator |     }
2026-03-23 00:02:29.853217 | orchestrator |
2026-03-23 00:02:29.853221 | orchestrator |   # openstack_networking_port_v2.node_port_management[1] will be created
2026-03-23 00:02:29.853224 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-23 00:02:29.853230 | orchestrator |       + admin_state_up         = (known after apply)
2026-03-23 00:02:29.853234 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-03-23 00:02:29.853238 | orchestrator |       + all_security_group_ids = (known after apply)
2026-03-23 00:02:29.853242 | orchestrator |       + all_tags               = (known after apply)
2026-03-23 00:02:29.853245 | orchestrator |       + device_id              = (known after apply)
2026-03-23 00:02:29.853249 | orchestrator |       + device_owner           = (known after apply)
2026-03-23 00:02:29.853253 | orchestrator |       + dns_assignment         = (known after apply)
2026-03-23 00:02:29.853256 | orchestrator |       + dns_name               = (known after apply)
2026-03-23 00:02:29.853260 | orchestrator |       + id                     = (known after apply)
2026-03-23 00:02:29.853264 | orchestrator |       + mac_address            = (known after apply)
2026-03-23 00:02:29.853268 | orchestrator |       + network_id             = (known after apply)
2026-03-23 00:02:29.853271 | orchestrator |       + port_security_enabled  = (known after apply)
2026-03-23 00:02:29.853275 | orchestrator |       + qos_policy_id          = (known after apply)
2026-03-23 00:02:29.853279 | orchestrator |       + region                 = (known after apply)
2026-03-23 00:02:29.853282 | orchestrator |       + security_group_ids     = (known after apply)
2026-03-23 00:02:29.853286 | orchestrator |       + tenant_id              = (known after apply)
2026-03-23 00:02:29.853290 | orchestrator |
2026-03-23 00:02:29.853293 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853297 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-03-23 00:02:29.853301 | orchestrator |         }
2026-03-23 00:02:29.853304 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853308 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-03-23 00:02:29.853312 | orchestrator |         }
2026-03-23 00:02:29.853316 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853319 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-03-23 00:02:29.853323 | orchestrator |         }
2026-03-23 00:02:29.853327 | orchestrator |
2026-03-23 00:02:29.853331 | orchestrator |       + binding (known after apply)
2026-03-23 00:02:29.853334 | orchestrator |
2026-03-23 00:02:29.853338 | orchestrator |       + fixed_ip {
2026-03-23 00:02:29.853344 | orchestrator |           + ip_address = "192.168.16.11"
2026-03-23 00:02:29.853348 | orchestrator |           + subnet_id  = (known after apply)
2026-03-23 00:02:29.853352 | orchestrator |         }
2026-03-23 00:02:29.853356 | orchestrator |     }
2026-03-23 00:02:29.853359 | orchestrator |
2026-03-23 00:02:29.853363 | orchestrator |   # openstack_networking_port_v2.node_port_management[2] will be created
2026-03-23 00:02:29.853367 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-23 00:02:29.853370 | orchestrator |       + admin_state_up         = (known after apply)
2026-03-23 00:02:29.853374 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-03-23 00:02:29.853378 | orchestrator |       + all_security_group_ids = (known after apply)
2026-03-23 00:02:29.853382 | orchestrator |       + all_tags               = (known after apply)
2026-03-23 00:02:29.853389 | orchestrator |       + device_id              = (known after apply)
2026-03-23 00:02:29.853392 | orchestrator |       + device_owner           = (known after apply)
2026-03-23 00:02:29.853396 | orchestrator |       + dns_assignment         = (known after apply)
2026-03-23 00:02:29.853400 | orchestrator |       + dns_name               = (known after apply)
2026-03-23 00:02:29.853403 | orchestrator |       + id                     = (known after apply)
2026-03-23 00:02:29.853407 | orchestrator |       + mac_address            = (known after apply)
2026-03-23 00:02:29.853411 | orchestrator |       + network_id             = (known after apply)
2026-03-23 00:02:29.853414 | orchestrator |       + port_security_enabled  = (known after apply)
2026-03-23 00:02:29.853418 | orchestrator |       + qos_policy_id          = (known after apply)
2026-03-23 00:02:29.853422 | orchestrator |       + region                 = (known after apply)
2026-03-23 00:02:29.853426 | orchestrator |       + security_group_ids     = (known after apply)
2026-03-23 00:02:29.853429 | orchestrator |       + tenant_id              = (known after apply)
2026-03-23 00:02:29.853433 | orchestrator |
2026-03-23 00:02:29.853437 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853440 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-03-23 00:02:29.853444 | orchestrator |         }
2026-03-23 00:02:29.853448 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853451 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-03-23 00:02:29.853455 | orchestrator |         }
2026-03-23 00:02:29.853459 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853463 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-03-23 00:02:29.853466 | orchestrator |         }
2026-03-23 00:02:29.853470 | orchestrator |
2026-03-23 00:02:29.853474 | orchestrator |       + binding (known after apply)
2026-03-23 00:02:29.853477 | orchestrator |
2026-03-23 00:02:29.853481 | orchestrator |       + fixed_ip {
2026-03-23 00:02:29.853485 | orchestrator |           + ip_address = "192.168.16.12"
2026-03-23 00:02:29.853489 | orchestrator |           + subnet_id  = (known after apply)
2026-03-23 00:02:29.853492 | orchestrator |         }
2026-03-23 00:02:29.853496 | orchestrator |     }
2026-03-23 00:02:29.853500 | orchestrator |
2026-03-23 00:02:29.853503 | orchestrator |   # openstack_networking_port_v2.node_port_management[3] will be created
2026-03-23 00:02:29.853507 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-23 00:02:29.853511 | orchestrator |       + admin_state_up         = (known after apply)
2026-03-23 00:02:29.853515 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-03-23 00:02:29.853519 | orchestrator |       + all_security_group_ids = (known after apply)
2026-03-23 00:02:29.853522 | orchestrator |       + all_tags               = (known after apply)
2026-03-23 00:02:29.853526 | orchestrator |       + device_id              = (known after apply)
2026-03-23 00:02:29.853530 | orchestrator |       + device_owner           = (known after apply)
2026-03-23 00:02:29.853533 | orchestrator |       + dns_assignment         = (known after apply)
2026-03-23 00:02:29.853537 | orchestrator |       + dns_name               = (known after apply)
2026-03-23 00:02:29.853541 | orchestrator |       + id                     = (known after apply)
2026-03-23 00:02:29.853544 | orchestrator |       + mac_address            = (known after apply)
2026-03-23 00:02:29.853548 | orchestrator |       + network_id             = (known after apply)
2026-03-23 00:02:29.853552 | orchestrator |       + port_security_enabled  = (known after apply)
2026-03-23 00:02:29.853555 | orchestrator |       + qos_policy_id          = (known after apply)
2026-03-23 00:02:29.853559 | orchestrator |       + region                 = (known after apply)
2026-03-23 00:02:29.853563 | orchestrator |       + security_group_ids     = (known after apply)
2026-03-23 00:02:29.853566 | orchestrator |       + tenant_id              = (known after apply)
2026-03-23 00:02:29.853570 | orchestrator |
2026-03-23 00:02:29.853574 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853578 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-03-23 00:02:29.853581 | orchestrator |         }
2026-03-23 00:02:29.853585 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853589 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-03-23 00:02:29.853592 | orchestrator |         }
2026-03-23 00:02:29.853596 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853600 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-03-23 00:02:29.853604 | orchestrator |         }
2026-03-23 00:02:29.853607 | orchestrator |
2026-03-23 00:02:29.853613 | orchestrator |       + binding (known after apply)
2026-03-23 00:02:29.853617 | orchestrator |
2026-03-23 00:02:29.853621 | orchestrator |       + fixed_ip {
2026-03-23 00:02:29.853625 | orchestrator |           + ip_address = "192.168.16.13"
2026-03-23 00:02:29.853628 | orchestrator |           + subnet_id  = (known after apply)
2026-03-23 00:02:29.853632 | orchestrator |         }
2026-03-23 00:02:29.853636 | orchestrator |     }
2026-03-23 00:02:29.853640 | orchestrator |
2026-03-23 00:02:29.853643 | orchestrator |   # openstack_networking_port_v2.node_port_management[4] will be created
2026-03-23 00:02:29.853647 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-23 00:02:29.853651 | orchestrator |       + admin_state_up         = (known after apply)
2026-03-23 00:02:29.853655 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-03-23 00:02:29.853658 | orchestrator |       + all_security_group_ids = (known after apply)
2026-03-23 00:02:29.853662 | orchestrator |       + all_tags               = (known after apply)
2026-03-23 00:02:29.853666 | orchestrator |       + device_id              = (known after apply)
2026-03-23 00:02:29.853669 | orchestrator |       + device_owner           = (known after apply)
2026-03-23 00:02:29.853673 | orchestrator |       + dns_assignment         = (known after apply)
2026-03-23 00:02:29.853677 | orchestrator |       + dns_name               = (known after apply)
2026-03-23 00:02:29.853683 | orchestrator |       + id                     = (known after apply)
2026-03-23 00:02:29.853687 | orchestrator |       + mac_address            = (known after apply)
2026-03-23 00:02:29.853690 | orchestrator |       + network_id             = (known after apply)
2026-03-23 00:02:29.853694 | orchestrator |       + port_security_enabled  = (known after apply)
2026-03-23 00:02:29.853698 | orchestrator |       + qos_policy_id          = (known after apply)
2026-03-23 00:02:29.853701 | orchestrator |       + region                 = (known after apply)
2026-03-23 00:02:29.853705 | orchestrator |       + security_group_ids     = (known after apply)
2026-03-23 00:02:29.853709 | orchestrator |       + tenant_id              = (known after apply)
2026-03-23 00:02:29.853713 | orchestrator |
2026-03-23 00:02:29.853717 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853723 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-03-23 00:02:29.853727 | orchestrator |         }
2026-03-23 00:02:29.853730 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853734 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-03-23 00:02:29.853738 | orchestrator |         }
2026-03-23 00:02:29.853741 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853749 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-03-23 00:02:29.853753 | orchestrator |         }
2026-03-23 00:02:29.853757 | orchestrator |
2026-03-23 00:02:29.853761 | orchestrator |       + binding (known after apply)
2026-03-23 00:02:29.853764 | orchestrator |
2026-03-23 00:02:29.853768 | orchestrator |       + fixed_ip {
2026-03-23 00:02:29.853772 | orchestrator |           + ip_address = "192.168.16.14"
2026-03-23 00:02:29.853776 | orchestrator |           + subnet_id  = (known after apply)
2026-03-23 00:02:29.853779 | orchestrator |         }
2026-03-23 00:02:29.853783 | orchestrator |     }
2026-03-23 00:02:29.853787 | orchestrator |
2026-03-23 00:02:29.853791 | orchestrator |   # openstack_networking_port_v2.node_port_management[5] will be created
2026-03-23 00:02:29.853805 | orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-23 00:02:29.853809 | orchestrator |       + admin_state_up         = (known after apply)
2026-03-23 00:02:29.853813 | orchestrator |       + all_fixed_ips          = (known after apply)
2026-03-23 00:02:29.853817 | orchestrator |       + all_security_group_ids = (known after apply)
2026-03-23 00:02:29.853821 | orchestrator |       + all_tags               = (known after apply)
2026-03-23 00:02:29.853825 | orchestrator |       + device_id              = (known after apply)
2026-03-23 00:02:29.853828 | orchestrator |       + device_owner           = (known after apply)
2026-03-23 00:02:29.853832 | orchestrator |       + dns_assignment         = (known after apply)
2026-03-23 00:02:29.853836 | orchestrator |       + dns_name               = (known after apply)
2026-03-23 00:02:29.853840 | orchestrator |       + id                     = (known after apply)
2026-03-23 00:02:29.853843 | orchestrator |       + mac_address            = (known after apply)
2026-03-23 00:02:29.853847 | orchestrator |       + network_id             = (known after apply)
2026-03-23 00:02:29.853851 | orchestrator |       + port_security_enabled  = (known after apply)
2026-03-23 00:02:29.853855 | orchestrator |       + qos_policy_id          = (known after apply)
2026-03-23 00:02:29.853861 | orchestrator |       + region                 = (known after apply)
2026-03-23 00:02:29.853865 | orchestrator |       + security_group_ids     = (known after apply)
2026-03-23 00:02:29.853869 | orchestrator |       + tenant_id              = (known after apply)
2026-03-23 00:02:29.853873 | orchestrator |
2026-03-23 00:02:29.853877 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853880 | orchestrator |           + ip_address = "192.168.16.254/32"
2026-03-23 00:02:29.853884 | orchestrator |         }
2026-03-23 00:02:29.853888 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853892 | orchestrator |           + ip_address = "192.168.16.8/32"
2026-03-23 00:02:29.853896 | orchestrator |         }
2026-03-23 00:02:29.853899 | orchestrator |       + allowed_address_pairs {
2026-03-23 00:02:29.853903 | orchestrator |           + ip_address = "192.168.16.9/32"
2026-03-23 00:02:29.853907 | orchestrator |         }
2026-03-23 00:02:29.853911 | orchestrator |
2026-03-23 00:02:29.853915 | orchestrator |       + binding (known after apply)
2026-03-23 00:02:29.853918 | orchestrator |
2026-03-23 00:02:29.853922 | orchestrator |       + fixed_ip {
2026-03-23 00:02:29.853926 | orchestrator |           + ip_address = "192.168.16.15"
2026-03-23 00:02:29.853930 | orchestrator |           + subnet_id  = (known after apply)
2026-03-23 00:02:29.853934 | orchestrator |         }
2026-03-23 00:02:29.853937 | orchestrator |     }
2026-03-23 00:02:29.853941 | orchestrator |
2026-03-23 00:02:29.853945 | orchestrator |   # openstack_networking_router_interface_v2.router_interface will be created
2026-03-23 00:02:29.853949 | orchestrator |   + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-03-23 00:02:29.853953 | orchestrator |       + force_destroy = false
2026-03-23 00:02:29.853957 | orchestrator |       + id            = (known after apply)
2026-03-23 00:02:29.853960 | orchestrator |       + port_id       = (known after apply)
2026-03-23 00:02:29.853964 | orchestrator |       + region        = (known after apply)
2026-03-23 00:02:29.853968 | orchestrator |       + router_id     = (known after apply)
2026-03-23 00:02:29.853972 | orchestrator |       + subnet_id     = (known after apply)
2026-03-23 00:02:29.853975 | orchestrator |     }
2026-03-23 00:02:29.853979 | orchestrator |
2026-03-23 00:02:29.853983 | orchestrator |   # openstack_networking_router_v2.router will be created
2026-03-23 00:02:29.853987 | orchestrator |   + resource "openstack_networking_router_v2" "router" {
2026-03-23 00:02:29.853991 | orchestrator |       + admin_state_up          = (known after apply)
2026-03-23 00:02:29.853994 | orchestrator |       + all_tags                = (known after apply)
2026-03-23 00:02:29.853998 | orchestrator |       + availability_zone_hints = [
2026-03-23 00:02:29.854002 | orchestrator |           + "nova",
2026-03-23 00:02:29.854006 | orchestrator |         ]
2026-03-23 00:02:29.854010 | orchestrator |       + distributed             = (known after apply)
2026-03-23 00:02:29.854025 | orchestrator |       + enable_snat             = (known after apply)
2026-03-23 00:02:29.854030 | orchestrator |       + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-03-23 00:02:29.854033 | orchestrator |       + external_qos_policy_id  = (known after apply)
2026-03-23 00:02:29.854037 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854041 | orchestrator |       + name                    = "testbed"
2026-03-23 00:02:29.854045 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854048 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854052 | orchestrator |
2026-03-23 00:02:29.854056 | orchestrator |       + external_fixed_ip (known after apply)
2026-03-23 00:02:29.854060 | orchestrator |     }
2026-03-23 00:02:29.854063 | orchestrator |
2026-03-23 00:02:29.854067 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-03-23 00:02:29.854072 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-03-23 00:02:29.854076 | orchestrator |       + description             = "ssh"
2026-03-23 00:02:29.854079 | orchestrator |       + direction               = "ingress"
2026-03-23 00:02:29.854083 | orchestrator |       + ethertype               = "IPv4"
2026-03-23 00:02:29.854087 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854091 | orchestrator |       + port_range_max          = 22
2026-03-23 00:02:29.854094 | orchestrator |       + port_range_min          = 22
2026-03-23 00:02:29.854098 | orchestrator |       + protocol                = "tcp"
2026-03-23 00:02:29.854102 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854110 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-23 00:02:29.854114 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-23 00:02:29.854117 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-23 00:02:29.854121 | orchestrator |       + security_group_id       = (known after apply)
2026-03-23 00:02:29.854125 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854129 | orchestrator |     }
2026-03-23 00:02:29.854133 | orchestrator |
2026-03-23 00:02:29.854137 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-03-23 00:02:29.854140 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-03-23 00:02:29.854144 | orchestrator |       + description             = "wireguard"
2026-03-23 00:02:29.854148 | orchestrator |       + direction               = "ingress"
2026-03-23 00:02:29.854152 | orchestrator |       + ethertype               = "IPv4"
2026-03-23 00:02:29.854155 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854159 | orchestrator |       + port_range_max          = 51820
2026-03-23 00:02:29.854163 | orchestrator |       + port_range_min          = 51820
2026-03-23 00:02:29.854167 | orchestrator |       + protocol                = "udp"
2026-03-23 00:02:29.854173 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854177 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-23 00:02:29.854181 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-23 00:02:29.854184 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-23 00:02:29.854188 | orchestrator |       + security_group_id       = (known after apply)
2026-03-23 00:02:29.854192 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854196 | orchestrator |     }
2026-03-23 00:02:29.854200 | orchestrator |
2026-03-23 00:02:29.854203 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-03-23 00:02:29.854207 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-03-23 00:02:29.854213 | orchestrator |       + direction               = "ingress"
2026-03-23 00:02:29.854217 | orchestrator |       + ethertype               = "IPv4"
2026-03-23 00:02:29.854225 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854229 | orchestrator |       + protocol                = "tcp"
2026-03-23 00:02:29.854233 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854236 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-23 00:02:29.854240 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-23 00:02:29.854244 | orchestrator |       + remote_ip_prefix        = "192.168.16.0/20"
2026-03-23 00:02:29.854248 | orchestrator |       + security_group_id       = (known after apply)
2026-03-23 00:02:29.854251 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854255 | orchestrator |     }
2026-03-23 00:02:29.854259 | orchestrator |
2026-03-23 00:02:29.854263 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-03-23 00:02:29.854267 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-03-23 00:02:29.854270 | orchestrator |       + direction               = "ingress"
2026-03-23 00:02:29.854274 | orchestrator |       + ethertype               = "IPv4"
2026-03-23 00:02:29.854278 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854282 | orchestrator |       + protocol                = "udp"
2026-03-23 00:02:29.854285 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854289 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-23 00:02:29.854293 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-23 00:02:29.854297 | orchestrator |       + remote_ip_prefix        = "192.168.16.0/20"
2026-03-23 00:02:29.854300 | orchestrator |       + security_group_id       = (known after apply)
2026-03-23 00:02:29.854304 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854308 | orchestrator |     }
2026-03-23 00:02:29.854312 | orchestrator |
2026-03-23 00:02:29.854315 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-03-23 00:02:29.854322 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-03-23 00:02:29.854326 | orchestrator |       + direction               = "ingress"
2026-03-23 00:02:29.854329 | orchestrator |       + ethertype               = "IPv4"
2026-03-23 00:02:29.854333 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854337 | orchestrator |       + protocol                = "icmp"
2026-03-23 00:02:29.854341 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854344 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-23 00:02:29.854348 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-23 00:02:29.854352 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-23 00:02:29.854356 | orchestrator |       + security_group_id       = (known after apply)
2026-03-23 00:02:29.854360 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854363 | orchestrator |     }
2026-03-23 00:02:29.854367 | orchestrator |
2026-03-23 00:02:29.854371 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-03-23 00:02:29.854375 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-03-23 00:02:29.854378 | orchestrator |       + direction               = "ingress"
2026-03-23 00:02:29.854382 | orchestrator |       + ethertype               = "IPv4"
2026-03-23 00:02:29.854386 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854390 | orchestrator |       + protocol                = "tcp"
2026-03-23 00:02:29.854393 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854397 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-23 00:02:29.854401 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-23 00:02:29.854405 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-23 00:02:29.854408 | orchestrator |       + security_group_id       = (known after apply)
2026-03-23 00:02:29.854412 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854416 | orchestrator |     }
2026-03-23 00:02:29.854420 | orchestrator |
2026-03-23 00:02:29.854423 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-03-23 00:02:29.854427 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-03-23 00:02:29.854431 | orchestrator |       + direction               = "ingress"
2026-03-23 00:02:29.854435 | orchestrator |       + ethertype               = "IPv4"
2026-03-23 00:02:29.854438 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854442 | orchestrator |       + protocol                = "udp"
2026-03-23 00:02:29.854446 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854450 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-23 00:02:29.854454 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-23 00:02:29.854457 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-23 00:02:29.854461 | orchestrator |       + security_group_id       = (known after apply)
2026-03-23 00:02:29.854465 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854469 | orchestrator |     }
2026-03-23 00:02:29.854473 | orchestrator |
2026-03-23 00:02:29.854476 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-03-23 00:02:29.854480 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-03-23 00:02:29.854484 | orchestrator |       + direction               = "ingress"
2026-03-23 00:02:29.854488 | orchestrator |       + ethertype               = "IPv4"
2026-03-23 00:02:29.854491 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854495 | orchestrator |       + protocol                = "icmp"
2026-03-23 00:02:29.854502 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854506 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-23 00:02:29.854510 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-23 00:02:29.854513 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-23 00:02:29.854517 | orchestrator |       + security_group_id       = (known after apply)
2026-03-23 00:02:29.854521 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854527 | orchestrator |     }
2026-03-23 00:02:29.854531 | orchestrator |
2026-03-23 00:02:29.854535 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-03-23 00:02:29.854539 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-03-23 00:02:29.854543 | orchestrator |       + description             = "vrrp"
2026-03-23 00:02:29.854546 | orchestrator |       + direction               = "ingress"
2026-03-23 00:02:29.854550 | orchestrator |       + ethertype               = "IPv4"
2026-03-23 00:02:29.854554 | orchestrator |       + id                      = (known after apply)
2026-03-23 00:02:29.854558 | orchestrator |       + protocol                = "112"
2026-03-23 00:02:29.854561 | orchestrator |       + region                  = (known after apply)
2026-03-23 00:02:29.854565 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-23 00:02:29.854569 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-23 00:02:29.854573 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-23 00:02:29.854576 | orchestrator |       + security_group_id       = (known after apply)
2026-03-23 00:02:29.854580 | orchestrator |       + tenant_id               = (known after apply)
2026-03-23 00:02:29.854584 | orchestrator |     }
2026-03-23 00:02:29.854588 | orchestrator |
2026-03-23 00:02:29.854591 | orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
2026-03-23 00:02:29.854595 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-03-23 00:02:29.854599 | orchestrator |       + all_tags    = (known after apply)
2026-03-23 00:02:29.854603 | orchestrator |       + description = "management security group"
2026-03-23 00:02:29.854607 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.854610 | orchestrator |       + name        = "testbed-management"
2026-03-23 00:02:29.854614 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.854618 | orchestrator |       + stateful    = (known after apply)
2026-03-23 00:02:29.854622 | orchestrator |       + tenant_id   = (known after apply)
2026-03-23 00:02:29.854625 | orchestrator |     }
2026-03-23 00:02:29.854629 | orchestrator |
2026-03-23 00:02:29.854633 | orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
2026-03-23 00:02:29.854637 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-03-23 00:02:29.854641 | orchestrator |       + all_tags    = (known after apply)
2026-03-23 00:02:29.854644 | orchestrator |       + description = "node security group"
2026-03-23 00:02:29.854648 | orchestrator |       + id          = (known after apply)
2026-03-23 00:02:29.854652 | orchestrator |       + name        = "testbed-node"
2026-03-23 00:02:29.854655 | orchestrator |       + region      = (known after apply)
2026-03-23 00:02:29.854659 | orchestrator |       + stateful    = (known after apply)
2026-03-23 00:02:29.854663 | orchestrator |       + tenant_id   = (known after apply)
2026-03-23 00:02:29.854667 | orchestrator |     }
2026-03-23 00:02:29.854670 | orchestrator |
2026-03-23 00:02:29.854674 | orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
2026-03-23 00:02:29.854678 | orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-03-23 00:02:29.854682 | orchestrator |       + all_tags          = (known after apply)
2026-03-23 00:02:29.854685 | orchestrator |       + cidr              = "192.168.16.0/20"
2026-03-23 00:02:29.854689 | orchestrator |       + dns_nameservers   = [
2026-03-23 00:02:29.854693 | orchestrator |           + "8.8.8.8",
2026-03-23 00:02:29.854697 | orchestrator |           + "9.9.9.9",
2026-03-23 00:02:29.854701 | orchestrator |         ]
2026-03-23 00:02:29.854704 | orchestrator |       + enable_dhcp       = true
2026-03-23 00:02:29.854708 | orchestrator |       + gateway_ip        = (known after apply)
2026-03-23 00:02:29.854714 | orchestrator |       + id                = (known after apply)
2026-03-23 00:02:29.854718 | orchestrator |       + ip_version        = 4
2026-03-23 00:02:29.854722 | orchestrator |       + ipv6_address_mode = (known after apply)
2026-03-23 00:02:29.854726 | orchestrator |       + ipv6_ra_mode      = (known after apply)
2026-03-23 00:02:29.854730 | orchestrator |       + name              = "subnet-testbed-management"
2026-03-23 00:02:29.854734 | orchestrator | + network_id = (known after apply) 2026-03-23 00:02:29.854738 | orchestrator | + no_gateway = false 2026-03-23 00:02:29.854741 | orchestrator | + region = (known after apply) 2026-03-23 00:02:29.854745 | orchestrator | + service_types = (known after apply) 2026-03-23 00:02:29.854752 | orchestrator | + tenant_id = (known after apply) 2026-03-23 00:02:29.854756 | orchestrator | 2026-03-23 00:02:29.854759 | orchestrator | + allocation_pool { 2026-03-23 00:02:29.854763 | orchestrator | + end = "192.168.31.250" 2026-03-23 00:02:29.854767 | orchestrator | + start = "192.168.31.200" 2026-03-23 00:02:29.854771 | orchestrator | } 2026-03-23 00:02:29.854775 | orchestrator | } 2026-03-23 00:02:29.854779 | orchestrator | 2026-03-23 00:02:29.854782 | orchestrator | # terraform_data.image will be created 2026-03-23 00:02:29.854786 | orchestrator | + resource "terraform_data" "image" { 2026-03-23 00:02:29.854790 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.854816 | orchestrator | + input = "Ubuntu 24.04" 2026-03-23 00:02:29.854823 | orchestrator | + output = (known after apply) 2026-03-23 00:02:29.854829 | orchestrator | } 2026-03-23 00:02:29.854837 | orchestrator | 2026-03-23 00:02:29.854842 | orchestrator | # terraform_data.image_node will be created 2026-03-23 00:02:29.854846 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-23 00:02:29.854850 | orchestrator | + id = (known after apply) 2026-03-23 00:02:29.854854 | orchestrator | + input = "Ubuntu 24.04" 2026-03-23 00:02:29.854857 | orchestrator | + output = (known after apply) 2026-03-23 00:02:29.854861 | orchestrator | } 2026-03-23 00:02:29.854865 | orchestrator | 2026-03-23 00:02:29.854869 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-23 00:02:29.854873 | orchestrator | 2026-03-23 00:02:29.854876 | orchestrator | Changes to Outputs: 2026-03-23 00:02:29.854880 | orchestrator | + manager_address = (sensitive value) 2026-03-23 00:02:29.854884 | orchestrator | + private_key = (sensitive value) 2026-03-23 00:02:30.048691 | orchestrator | terraform_data.image_node: Creating... 2026-03-23 00:02:30.049258 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=1ea8251f-0b91-d7b4-cde7-a6b73e0c0f56] 2026-03-23 00:02:30.872655 | orchestrator | terraform_data.image: Creating... 2026-03-23 00:02:30.872731 | orchestrator | terraform_data.image: Creation complete after 0s [id=7f7dbcbb-0d36-5306-89b0-0b528dd2a42e] 2026-03-23 00:02:30.885142 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-23 00:02:30.885184 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-23 00:02:30.904017 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-23 00:02:30.918178 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-23 00:02:30.918266 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-23 00:02:30.918283 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-23 00:02:30.918288 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-23 00:02:30.918293 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-23 00:02:30.922253 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-23 00:02:30.929697 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-23 00:02:31.402857 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-23 00:02:31.407849 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 
2026-03-23 00:02:31.409698 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-23 00:02:31.414520 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-23 00:02:31.472604 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-03-23 00:02:31.478124 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-23 00:02:32.134496 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=02ab38c9-2bcb-4055-86df-eddc5ec85f33] 2026-03-23 00:02:32.140861 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-23 00:02:34.684882 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5] 2026-03-23 00:02:34.691616 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=c3b20d12-9473-438c-9aa2-c72737b9e6d0] 2026-03-23 00:02:34.694051 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-03-23 00:02:34.697825 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-23 00:02:34.715997 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=77dd2124-92bc-4f46-82be-f9b228a0677e] 2026-03-23 00:02:34.725580 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=59b4a83f-d9c4-4d19-8941-518108c7531d] 2026-03-23 00:02:34.734076 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-03-23 00:02:34.740604 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2026-03-23 00:02:34.744784 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=6d03a194-715d-49d1-b802-c824960a80c4] 2026-03-23 00:02:34.749551 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=1d2a1acf-b303-4df2-8937-2ee8f9bbf12f] 2026-03-23 00:02:34.756186 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-23 00:02:34.757571 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-23 00:02:34.760933 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=a6dc9e4a-bb14-4275-87ca-e10d4388766d] 2026-03-23 00:02:34.771658 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-23 00:02:34.774544 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=45f3458fc00a761566bd708bd468d5c6d9d5e205] 2026-03-23 00:02:34.782155 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-23 00:02:34.785166 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=7c66a1a7ceb5cb5e04a4fe8916c7d283273b2efe] 2026-03-23 00:02:34.790362 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-03-23 00:02:34.793765 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=0331d52b-cef6-4339-b12c-c63469d626c6] 2026-03-23 00:02:34.814188 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=ff498ee2-e745-4049-bce7-87b4610f4b76] 2026-03-23 00:02:35.510191 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=b9af591f-82bf-418b-a61f-cda533bff4cd] 2026-03-23 00:02:37.150487 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=1778257e-f515-45d8-881b-d6db60d9afdb] 2026-03-23 00:02:37.158211 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-03-23 00:02:38.204410 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=c3578134-7537-4e48-a12d-a1d3ec7adf49] 2026-03-23 00:02:38.255969 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=b9686d40-ed2c-40e1-8ef5-b5d90039fa5e] 2026-03-23 00:02:38.269625 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=53d97a78-52aa-4b6a-8314-cc73eaae2f37] 2026-03-23 00:02:38.344761 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=e5e55cca-5656-41b4-9a27-a4492511de93] 2026-03-23 00:02:38.360775 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=ab4bd864-28a4-4976-ae20-c7c9f16ccd15] 2026-03-23 00:02:38.391099 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=29411df8-7097-419c-8410-7d3b9e1926ff] 2026-03-23 00:02:40.425288 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=aa0ed0dd-3860-4c47-939a-f0044fecc51d] 2026-03-23 00:02:40.433515 | orchestrator | 
openstack_networking_secgroup_v2.security_group_node: Creating... 2026-03-23 00:02:40.433588 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-23 00:02:40.433597 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-23 00:02:40.721471 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=885883f3-bade-4e91-b3a0-2f84b80a2290] 2026-03-23 00:02:40.740273 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-23 00:02:40.743096 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-23 00:02:40.743929 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-23 00:02:40.744686 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-23 00:02:40.744814 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-03-23 00:02:40.748669 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-23 00:02:40.751095 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-23 00:02:40.751400 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-23 00:02:40.963027 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=fc0baf28-9d63-4df6-b30e-9d34da59afac] 2026-03-23 00:02:40.976135 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-23 00:02:41.309337 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=50293566-f9b6-4d04-9ded-02a709e617d7] 2026-03-23 00:02:41.318731 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 
2026-03-23 00:02:42.004930 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=b0ee0518-cf10-49a0-b487-7bb0210b923b] 2026-03-23 00:02:42.007204 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=f28fa15b-193f-453c-9370-b85a84f45546] 2026-03-23 00:02:42.013251 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-23 00:02:42.014093 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-23 00:02:42.174326 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=9db9b473-75b1-4692-a2b5-662b3674dc61] 2026-03-23 00:02:42.181255 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-23 00:02:42.266873 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=67ace1a1-530a-4ebf-b0e8-a0543aec58a8] 2026-03-23 00:02:42.276392 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=4553563c-ee35-47cc-9928-70ac9c97e77c] 2026-03-23 00:02:42.278739 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-23 00:02:42.280031 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-23 00:02:42.316781 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=8c073c5a-6d00-45f5-b6e2-c206958d0b73] 2026-03-23 00:02:42.325539 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 
2026-03-23 00:02:42.337095 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=27b1cd0a-838d-45d2-8f8e-246a058d7155] 2026-03-23 00:02:42.431887 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=bf097f5f-5471-427f-91ec-f5a198bde2f9] 2026-03-23 00:02:42.523685 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=cfc3460f-94e5-42f1-ad00-7b699865d0fb] 2026-03-23 00:02:42.769839 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=926b46ef-1f8a-489b-801d-5e66e3380133] 2026-03-23 00:02:42.931159 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=cf381d07-b342-445b-9ba3-04d84f8e89a4] 2026-03-23 00:02:42.933364 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=0f7c2b7f-9988-4611-b3f2-79ba6a5d72f3] 2026-03-23 00:02:43.133464 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=9156b073-d888-4835-bcd8-3b9444fa9d44] 2026-03-23 00:02:43.143653 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=372052dd-c7b8-4450-b257-6163748ede4a] 2026-03-23 00:02:43.230664 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=0628c3ef-e5cc-4791-9bea-589babacfa5c] 2026-03-23 00:02:44.291488 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=1be4cba0-280e-40fc-81be-d546c3b151e5] 2026-03-23 00:02:44.313074 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-23 00:02:44.333911 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 
2026-03-23 00:02:44.335141 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-23 00:02:44.337863 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-23 00:02:44.338946 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-23 00:02:44.352192 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-03-23 00:02:44.355950 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-23 00:02:47.102390 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=9c3769da-0324-43de-b1f6-31c4952db5a8] 2026-03-23 00:02:47.112715 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-23 00:02:47.118993 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-23 00:02:47.121880 | orchestrator | local_file.inventory: Creating... 2026-03-23 00:02:47.123387 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=353ea4d66196d8c49a1e858002d77e4add0144ac] 2026-03-23 00:02:47.124748 | orchestrator | local_file.inventory: Creation complete after 0s [id=c97b64c41ad3c97125cde17e304d709a2d875ce5] 2026-03-23 00:02:48.504068 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=9c3769da-0324-43de-b1f6-31c4952db5a8] 2026-03-23 00:02:54.338822 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-03-23 00:02:54.338956 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-23 00:02:54.340074 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-23 00:02:54.343330 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[10s elapsed] 2026-03-23 00:02:54.355865 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-23 00:02:54.357141 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-23 00:03:04.347887 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-23 00:03:04.347981 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-23 00:03:04.347995 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-23 00:03:04.348010 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-23 00:03:04.356359 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-23 00:03:04.357645 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-23 00:03:14.348111 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-03-23 00:03:14.348323 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-03-23 00:03:14.348405 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-03-23 00:03:14.348481 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-23 00:03:14.357747 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-03-23 00:03:14.357891 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-03-23 00:03:24.357322 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-03-23 00:03:24.357415 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[40s elapsed] 2026-03-23 00:03:24.357423 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-03-23 00:03:24.357436 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-03-23 00:03:24.358465 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-03-23 00:03:24.358509 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-03-23 00:03:25.483651 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=59002b88-b10a-4ea9-90f2-d628afb6d08a] 2026-03-23 00:03:34.364321 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed] 2026-03-23 00:03:34.364472 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed] 2026-03-23 00:03:34.364480 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed] 2026-03-23 00:03:34.364486 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2026-03-23 00:03:34.364491 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed] 2026-03-23 00:03:35.469700 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=8ab7fe4a-ebe3-4ca2-846d-ded126120337] 2026-03-23 00:03:44.364941 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed] 2026-03-23 00:03:44.365107 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed] 2026-03-23 00:03:44.365135 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed] 2026-03-23 00:03:44.365155 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[1m0s elapsed] 2026-03-23 00:03:45.749673 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m2s [id=8cc0ec81-efb8-4570-9840-4cdcd99098fc] 2026-03-23 00:03:46.112616 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m2s [id=fed05ea8-0fdf-4366-a620-43c26c38747f] 2026-03-23 00:03:54.373128 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m10s elapsed] 2026-03-23 00:03:54.373254 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m10s elapsed] 2026-03-23 00:03:56.326610 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m12s [id=4b48c19a-bccf-4d82-83cd-5039f438c671] 2026-03-23 00:04:04.380318 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m20s elapsed] 2026-03-23 00:04:06.456148 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m22s [id=d2d985b6-4f2b-4951-ac95-18a1e53e9264] 2026-03-23 00:04:06.498701 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-23 00:04:06.500331 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-23 00:04:06.506609 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-23 00:04:06.513763 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-23 00:04:06.524795 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-23 00:04:06.525566 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-23 00:04:06.527216 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-23 00:04:06.530124 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
2026-03-23 00:04:06.535215 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3560241993288139717] 2026-03-23 00:04:06.545641 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-23 00:04:06.549084 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-23 00:04:06.581113 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-03-23 00:04:10.090496 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=8ab7fe4a-ebe3-4ca2-846d-ded126120337/56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5] 2026-03-23 00:04:10.097438 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=8cc0ec81-efb8-4570-9840-4cdcd99098fc/a6dc9e4a-bb14-4275-87ca-e10d4388766d] 2026-03-23 00:04:10.234455 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=d2d985b6-4f2b-4951-ac95-18a1e53e9264/6d03a194-715d-49d1-b802-c824960a80c4] 2026-03-23 00:04:16.325083 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=8ab7fe4a-ebe3-4ca2-846d-ded126120337/0331d52b-cef6-4339-b12c-c63469d626c6] 2026-03-23 00:04:16.334253 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=8cc0ec81-efb8-4570-9840-4cdcd99098fc/ff498ee2-e745-4049-bce7-87b4610f4b76] 2026-03-23 00:04:16.351841 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=d2d985b6-4f2b-4951-ac95-18a1e53e9264/c3b20d12-9473-438c-9aa2-c72737b9e6d0] 2026-03-23 00:04:16.362994 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=8ab7fe4a-ebe3-4ca2-846d-ded126120337/77dd2124-92bc-4f46-82be-f9b228a0677e] 2026-03-23 00:04:16.389865 | orchestrator | 
openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=d2d985b6-4f2b-4951-ac95-18a1e53e9264/1d2a1acf-b303-4df2-8937-2ee8f9bbf12f] 2026-03-23 00:04:16.416782 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=8cc0ec81-efb8-4570-9840-4cdcd99098fc/59b4a83f-d9c4-4d19-8941-518108c7531d] 2026-03-23 00:04:16.585437 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-03-23 00:04:26.585708 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-23 00:04:27.264308 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=87869439-a7ac-4d33-b60e-4876f41ca914] 2026-03-23 00:04:27.280547 | orchestrator | 2026-03-23 00:04:27.280619 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-23 00:04:27.280625 | orchestrator | 2026-03-23 00:04:27.280630 | orchestrator | Outputs: 2026-03-23 00:04:27.280635 | orchestrator | 2026-03-23 00:04:27.280639 | orchestrator | manager_address = 2026-03-23 00:04:27.280643 | orchestrator | private_key = 2026-03-23 00:04:27.684170 | orchestrator | ok: Runtime: 0:02:02.393983 2026-03-23 00:04:27.705773 | 2026-03-23 00:04:27.705917 | TASK [Create infrastructure (stable)] 2026-03-23 00:04:28.276072 | orchestrator | skipping: Conditional result was False 2026-03-23 00:04:28.294026 | 2026-03-23 00:04:28.294204 | TASK [Fetch manager address] 2026-03-23 00:04:28.747060 | orchestrator | ok 2026-03-23 00:04:28.754662 | 2026-03-23 00:04:28.754786 | TASK [Set manager_host address] 2026-03-23 00:04:28.843896 | orchestrator | ok 2026-03-23 00:04:28.853260 | 2026-03-23 00:04:28.853459 | LOOP [Update ansible collections] 2026-03-23 00:04:29.956812 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-23 00:04:29.957324 | orchestrator | [WARNING]: Collection 
osism.commons does not support Ansible version 2.15.2 2026-03-23 00:04:29.957429 | orchestrator | Starting galaxy collection install process 2026-03-23 00:04:29.957457 | orchestrator | Process install dependency map 2026-03-23 00:04:29.957479 | orchestrator | Starting collection install process 2026-03-23 00:04:29.957500 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2026-03-23 00:04:29.957526 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2026-03-23 00:04:29.957559 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-23 00:04:29.957621 | orchestrator | ok: Item: commons Runtime: 0:00:00.756489 2026-03-23 00:04:31.036755 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-23 00:04:31.036970 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-23 00:04:31.037025 | orchestrator | Starting galaxy collection install process 2026-03-23 00:04:31.037066 | orchestrator | Process install dependency map 2026-03-23 00:04:31.037104 | orchestrator | Starting collection install process 2026-03-23 00:04:31.037138 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2026-03-23 00:04:31.037173 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2026-03-23 00:04:31.037207 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-23 00:04:31.037262 | orchestrator | ok: Item: services Runtime: 0:00:00.780336 2026-03-23 00:04:31.067110 | 2026-03-23 00:04:31.067335 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-23 00:04:42.703240 | orchestrator | ok 2026-03-23 
00:04:42.715296 | 2026-03-23 00:04:42.715492 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-23 00:05:42.761160 | orchestrator | ok 2026-03-23 00:05:42.775702 | 2026-03-23 00:05:42.775848 | TASK [Fetch manager ssh hostkey] 2026-03-23 00:05:44.366633 | orchestrator | Output suppressed because no_log was given 2026-03-23 00:05:44.382590 | 2026-03-23 00:05:44.382805 | TASK [Get ssh keypair from terraform environment] 2026-03-23 00:05:44.931046 | orchestrator | ok: Runtime: 0:00:00.007956 2026-03-23 00:05:44.949457 | 2026-03-23 00:05:44.949622 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-23 00:05:45.000756 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-23 00:05:45.013063 | 2026-03-23 00:05:45.013211 | TASK [Run manager part 0] 2026-03-23 00:05:46.045668 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-23 00:05:46.103007 | orchestrator | 2026-03-23 00:05:46.103057 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-23 00:05:46.103064 | orchestrator | 2026-03-23 00:05:46.103077 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-23 00:05:47.965637 | orchestrator | ok: [testbed-manager] 2026-03-23 00:05:47.965743 | orchestrator | 2026-03-23 00:05:47.965779 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-23 00:05:47.965792 | orchestrator | 2026-03-23 00:05:47.965804 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-23 00:05:50.234148 | orchestrator | ok: [testbed-manager] 2026-03-23 00:05:50.234212 | orchestrator | 2026-03-23 00:05:50.234234 | orchestrator | TASK [Get home 
directory of ansible user] ************************************** 2026-03-23 00:05:50.871059 | orchestrator | ok: [testbed-manager] 2026-03-23 00:05:50.871131 | orchestrator | 2026-03-23 00:05:50.871198 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-23 00:05:50.918447 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:05:50.918522 | orchestrator | 2026-03-23 00:05:50.918541 | orchestrator | TASK [Update package cache] **************************************************** 2026-03-23 00:05:50.955064 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:05:50.955132 | orchestrator | 2026-03-23 00:05:50.955144 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-23 00:05:50.986782 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:05:50.986830 | orchestrator | 2026-03-23 00:05:50.986836 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-23 00:05:51.017450 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:05:51.017508 | orchestrator | 2026-03-23 00:05:51.017517 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-23 00:05:51.048947 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:05:51.049001 | orchestrator | 2026-03-23 00:05:51.049012 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-23 00:05:51.093520 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:05:51.093574 | orchestrator | 2026-03-23 00:05:51.093587 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-23 00:05:51.140061 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:05:51.140119 | orchestrator | 2026-03-23 00:05:51.140132 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-23 
00:05:52.000613 | orchestrator | changed: [testbed-manager] 2026-03-23 00:05:52.000668 | orchestrator | 2026-03-23 00:05:52.000694 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-23 00:09:12.990185 | orchestrator | changed: [testbed-manager] 2026-03-23 00:09:12.990296 | orchestrator | 2026-03-23 00:09:12.990323 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-23 00:10:28.753758 | orchestrator | changed: [testbed-manager] 2026-03-23 00:10:28.753860 | orchestrator | 2026-03-23 00:10:28.753876 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-23 00:10:52.137212 | orchestrator | changed: [testbed-manager] 2026-03-23 00:10:52.137304 | orchestrator | 2026-03-23 00:10:52.137322 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-23 00:11:00.664933 | orchestrator | changed: [testbed-manager] 2026-03-23 00:11:00.665227 | orchestrator | 2026-03-23 00:11:00.665265 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-23 00:11:00.709517 | orchestrator | ok: [testbed-manager] 2026-03-23 00:11:00.709601 | orchestrator | 2026-03-23 00:11:00.709618 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-23 00:11:01.505587 | orchestrator | ok: [testbed-manager] 2026-03-23 00:11:01.505675 | orchestrator | 2026-03-23 00:11:01.505848 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-23 00:11:03.258003 | orchestrator | changed: [testbed-manager] 2026-03-23 00:11:03.258136 | orchestrator | 2026-03-23 00:11:03.258153 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-23 00:11:09.216645 | orchestrator | changed: [testbed-manager] 2026-03-23 00:11:09.216683 | 
orchestrator | 2026-03-23 00:11:09.216703 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-23 00:11:14.818202 | orchestrator | changed: [testbed-manager] 2026-03-23 00:11:14.818294 | orchestrator | 2026-03-23 00:11:14.818307 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-23 00:11:17.300531 | orchestrator | changed: [testbed-manager] 2026-03-23 00:11:17.300570 | orchestrator | 2026-03-23 00:11:17.300576 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-23 00:11:19.022012 | orchestrator | changed: [testbed-manager] 2026-03-23 00:11:19.022127 | orchestrator | 2026-03-23 00:11:19.022143 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-23 00:11:20.128942 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-23 00:11:20.128984 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-23 00:11:20.128990 | orchestrator | 2026-03-23 00:11:20.128994 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-23 00:11:20.174769 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-23 00:11:20.174832 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-23 00:11:20.174842 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-23 00:11:20.174851 | orchestrator | deprecation_warnings=False in ansible.cfg. 
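The "Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"" task earlier in this run corresponds to Ansible's wait_for module with a search_regex. A minimal standalone sketch of the same probe (the helper name `wait_for_banner` is hypothetical, not part of the testbed scripts):

```python
import socket
import time

def wait_for_banner(host, port, substring, timeout=300.0, interval=5.0):
    """Poll host:port until it accepts a TCP connection and the first
    chunk of output contains `substring`, or raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval) as sock:
                sock.settimeout(interval)
                banner = sock.recv(256).decode("ascii", errors="replace")
                if substring in banner:
                    return banner.strip()
        except OSError:
            pass  # port not open yet, or no banner readable; retry
        time.sleep(interval)
    raise TimeoutError(f"{host}:{port} did not present {substring!r} in time")
```

Checking for the "OpenSSH" banner rather than just an open port matters here: sshd accepts TCP connections before host key generation finishes, so a bare port check can report ready too early.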
2026-03-23 00:11:23.407003 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-23 00:11:23.407041 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-23 00:11:23.407046 | orchestrator | 2026-03-23 00:11:23.407052 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-23 00:11:23.959647 | orchestrator | changed: [testbed-manager] 2026-03-23 00:11:23.959700 | orchestrator | 2026-03-23 00:11:23.959708 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-23 00:13:44.591369 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-23 00:13:44.591642 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-23 00:13:44.591665 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-23 00:13:44.591679 | orchestrator | 2026-03-23 00:13:44.591691 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-23 00:13:46.849911 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-23 00:13:46.849949 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-23 00:13:46.849954 | orchestrator | 2026-03-23 00:13:46.849959 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-23 00:13:46.849964 | orchestrator | 2026-03-23 00:13:46.849968 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-23 00:13:48.197656 | orchestrator | ok: [testbed-manager] 2026-03-23 00:13:48.197814 | orchestrator | 2026-03-23 00:13:48.197823 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-23 00:13:48.238461 | orchestrator | ok: [testbed-manager] 2026-03-23 00:13:48.238501 | 
orchestrator | 2026-03-23 00:13:48.238509 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-23 00:13:48.332814 | orchestrator | ok: [testbed-manager] 2026-03-23 00:13:48.333045 | orchestrator | 2026-03-23 00:13:48.333059 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-23 00:13:49.132101 | orchestrator | changed: [testbed-manager] 2026-03-23 00:13:49.132156 | orchestrator | 2026-03-23 00:13:49.132163 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-23 00:13:49.846324 | orchestrator | changed: [testbed-manager] 2026-03-23 00:13:49.846427 | orchestrator | 2026-03-23 00:13:49.846445 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-23 00:13:51.159383 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-23 00:13:51.159420 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-23 00:13:51.159426 | orchestrator | 2026-03-23 00:13:51.159439 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-23 00:13:52.563432 | orchestrator | changed: [testbed-manager] 2026-03-23 00:13:52.563482 | orchestrator | 2026-03-23 00:13:52.563490 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-23 00:13:54.267990 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-23 00:13:54.268034 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-23 00:13:54.268046 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-23 00:13:54.268057 | orchestrator | 2026-03-23 00:13:54.268070 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-23 00:13:54.330472 | orchestrator | skipping: 
[testbed-manager] 2026-03-23 00:13:54.330527 | orchestrator | 2026-03-23 00:13:54.330537 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-23 00:13:54.400321 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:13:54.400367 | orchestrator | 2026-03-23 00:13:54.400374 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-23 00:13:54.954560 | orchestrator | changed: [testbed-manager] 2026-03-23 00:13:54.954606 | orchestrator | 2026-03-23 00:13:54.954615 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-23 00:13:55.028798 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:13:55.028872 | orchestrator | 2026-03-23 00:13:55.028888 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-23 00:13:55.903166 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-23 00:13:55.903205 | orchestrator | changed: [testbed-manager] 2026-03-23 00:13:55.903212 | orchestrator | 2026-03-23 00:13:55.903218 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-23 00:13:55.944113 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:13:55.944155 | orchestrator | 2026-03-23 00:13:55.944162 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-23 00:13:55.984012 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:13:55.984096 | orchestrator | 2026-03-23 00:13:55.984120 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-23 00:13:56.026355 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:13:56.026397 | orchestrator | 2026-03-23 00:13:56.026405 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-23 00:13:56.113079 | 
orchestrator | skipping: [testbed-manager] 2026-03-23 00:13:56.113119 | orchestrator | 2026-03-23 00:13:56.113127 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-23 00:13:56.826152 | orchestrator | ok: [testbed-manager] 2026-03-23 00:13:56.826185 | orchestrator | 2026-03-23 00:13:56.826191 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-23 00:13:56.826196 | orchestrator | 2026-03-23 00:13:56.826200 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-23 00:13:58.180944 | orchestrator | ok: [testbed-manager] 2026-03-23 00:13:58.180985 | orchestrator | 2026-03-23 00:13:58.180992 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-23 00:13:59.152104 | orchestrator | changed: [testbed-manager] 2026-03-23 00:13:59.152140 | orchestrator | 2026-03-23 00:13:59.152146 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:13:59.152152 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-23 00:13:59.152157 | orchestrator | 2026-03-23 00:13:59.930829 | orchestrator | ok: Runtime: 0:08:14.509713 2026-03-23 00:13:59.942934 | 2026-03-23 00:13:59.943066 | TASK [Point out that the login on the manager is now possible] 2026-03-23 00:13:59.979220 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-23 00:13:59.988474 | 2026-03-23 00:13:59.988591 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-23 00:14:00.027078 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
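The "Set language variables in .bashrc configuration file" task in the operator play above is the classic lineinfile pattern: each export line is appended only if absent, which is why a rerun would report ok instead of changed. A rough, hypothetical Python equivalent of that idempotent append (`ensure_lines` is an illustrative name, not an OSISM helper):

```python
from pathlib import Path

def ensure_lines(path, lines):
    """Append each line to `path` unless it is already present.
    Returns True if the file was modified (mirrors Ansible's changed state)."""
    target = Path(path)
    existing = target.read_text().splitlines() if target.exists() else []
    missing = [line for line in lines if line not in existing]
    if not missing:
        return False  # nothing to do -> 'ok' in Ansible terms
    with target.open("a") as fh:
        for line in missing:
            fh.write(line + "\n")
    return True  # file changed -> 'changed' in Ansible terms
```

Called twice with the same three export lines, the first invocation returns True (changed) and the second False (ok), matching the idempotency contract the play relies on.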
2026-03-23 00:14:00.036823 | 2026-03-23 00:14:00.037027 | TASK [Run manager part 1 + 2] 2026-03-23 00:14:00.890292 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-23 00:14:00.948701 | orchestrator | 2026-03-23 00:14:00.948800 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-23 00:14:00.948819 | orchestrator | 2026-03-23 00:14:00.948849 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-23 00:14:03.766240 | orchestrator | ok: [testbed-manager] 2026-03-23 00:14:03.766561 | orchestrator | 2026-03-23 00:14:03.766589 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-23 00:14:03.810141 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:14:03.810187 | orchestrator | 2026-03-23 00:14:03.810195 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-23 00:14:03.846467 | orchestrator | ok: [testbed-manager] 2026-03-23 00:14:03.846512 | orchestrator | 2026-03-23 00:14:03.846519 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-23 00:14:03.895863 | orchestrator | ok: [testbed-manager] 2026-03-23 00:14:03.895915 | orchestrator | 2026-03-23 00:14:03.895925 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-23 00:14:03.959860 | orchestrator | ok: [testbed-manager] 2026-03-23 00:14:03.959988 | orchestrator | 2026-03-23 00:14:03.960002 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-23 00:14:04.028237 | orchestrator | ok: [testbed-manager] 2026-03-23 00:14:04.028363 | orchestrator | 2026-03-23 00:14:04.028400 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-23 00:14:04.083849 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-23 00:14:04.083940 | orchestrator | 2026-03-23 00:14:04.083957 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-23 00:14:04.781034 | orchestrator | ok: [testbed-manager] 2026-03-23 00:14:04.781113 | orchestrator | 2026-03-23 00:14:04.781130 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-23 00:14:04.832916 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:14:04.832978 | orchestrator | 2026-03-23 00:14:04.832988 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-23 00:14:06.145959 | orchestrator | changed: [testbed-manager] 2026-03-23 00:14:06.146030 | orchestrator | 2026-03-23 00:14:06.146041 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-23 00:14:06.658216 | orchestrator | ok: [testbed-manager] 2026-03-23 00:14:06.658273 | orchestrator | 2026-03-23 00:14:06.658281 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-23 00:14:07.767005 | orchestrator | changed: [testbed-manager] 2026-03-23 00:14:07.767065 | orchestrator | 2026-03-23 00:14:07.767075 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-23 00:14:21.914995 | orchestrator | changed: [testbed-manager] 2026-03-23 00:14:21.915053 | orchestrator | 2026-03-23 00:14:21.915062 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-23 00:14:22.543866 | orchestrator | ok: [testbed-manager] 2026-03-23 00:14:22.543925 | orchestrator | 2026-03-23 00:14:22.543936 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-23 00:14:22.596851 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:14:22.597045 | orchestrator | 2026-03-23 00:14:22.597065 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-23 00:14:23.487644 | orchestrator | changed: [testbed-manager] 2026-03-23 00:14:23.487690 | orchestrator | 2026-03-23 00:14:23.487700 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-23 00:14:24.423907 | orchestrator | changed: [testbed-manager] 2026-03-23 00:14:24.423974 | orchestrator | 2026-03-23 00:14:24.423984 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-23 00:14:24.968868 | orchestrator | changed: [testbed-manager] 2026-03-23 00:14:24.968942 | orchestrator | 2026-03-23 00:14:24.968954 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-23 00:14:25.013289 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-23 00:14:25.013352 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-23 00:14:25.013358 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-23 00:14:25.013363 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-23 00:14:27.089230 | orchestrator | changed: [testbed-manager] 2026-03-23 00:14:27.089309 | orchestrator | 2026-03-23 00:14:27.089320 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-23 00:14:35.931462 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-23 00:14:36.211532 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-23 00:14:36.211592 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-23 00:14:36.211631 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-23 00:14:36.211651 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-23 00:14:36.211661 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-23 00:14:36.211672 | orchestrator | 2026-03-23 00:14:36.211683 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-23 00:14:37.160333 | orchestrator | changed: [testbed-manager] 2026-03-23 00:14:37.160389 | orchestrator | 2026-03-23 00:14:37.160401 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-23 00:14:37.207160 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:14:37.207245 | orchestrator | 2026-03-23 00:14:37.207261 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-23 00:14:40.272844 | orchestrator | changed: [testbed-manager] 2026-03-23 00:14:40.272887 | orchestrator | 2026-03-23 00:14:40.272894 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-23 00:14:40.316639 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:14:40.316677 | orchestrator | 2026-03-23 00:14:40.316685 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-23 00:16:11.858288 | orchestrator | changed: [testbed-manager] 2026-03-23 
00:16:11.858377 | orchestrator | 2026-03-23 00:16:11.858395 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-23 00:16:12.996271 | orchestrator | ok: [testbed-manager] 2026-03-23 00:16:12.996312 | orchestrator | 2026-03-23 00:16:12.996320 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:16:12.996328 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-23 00:16:12.996334 | orchestrator | 2026-03-23 00:16:13.275587 | orchestrator | ok: Runtime: 0:02:12.757971 2026-03-23 00:16:13.295547 | 2026-03-23 00:16:13.295975 | TASK [Reboot manager] 2026-03-23 00:16:14.845636 | orchestrator | ok: Runtime: 0:00:00.964947 2026-03-23 00:16:14.861068 | 2026-03-23 00:16:14.861201 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-23 00:16:29.232624 | orchestrator | ok 2026-03-23 00:16:29.243293 | 2026-03-23 00:16:29.243446 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-23 00:17:29.279406 | orchestrator | ok 2026-03-23 00:17:29.286194 | 2026-03-23 00:17:29.286304 | TASK [Deploy manager + bootstrap nodes] 2026-03-23 00:17:31.685784 | orchestrator | 2026-03-23 00:17:31.685916 | orchestrator | # DEPLOY MANAGER 2026-03-23 00:17:31.685932 | orchestrator | 2026-03-23 00:17:31.685944 | orchestrator | + set -e 2026-03-23 00:17:31.685955 | orchestrator | + echo 2026-03-23 00:17:31.685967 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-23 00:17:31.685981 | orchestrator | + echo 2026-03-23 00:17:31.686076 | orchestrator | + cat /opt/manager-vars.sh 2026-03-23 00:17:31.689266 | orchestrator | export NUMBER_OF_NODES=6 2026-03-23 00:17:31.689286 | orchestrator | 2026-03-23 00:17:31.689322 | orchestrator | export CEPH_VERSION=reef 2026-03-23 00:17:31.689334 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-23 00:17:31.689345 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-23 00:17:31.689362 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-23 00:17:31.689371 | orchestrator | 2026-03-23 00:17:31.689386 | orchestrator | export ARA=false 2026-03-23 00:17:31.689395 | orchestrator | export DEPLOY_MODE=manager 2026-03-23 00:17:31.689410 | orchestrator | export TEMPEST=true 2026-03-23 00:17:31.689419 | orchestrator | export IS_ZUUL=true 2026-03-23 00:17:31.689428 | orchestrator | 2026-03-23 00:17:31.689443 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169 2026-03-23 00:17:31.689452 | orchestrator | export EXTERNAL_API=false 2026-03-23 00:17:31.689461 | orchestrator | 2026-03-23 00:17:31.689470 | orchestrator | export IMAGE_USER=ubuntu 2026-03-23 00:17:31.689482 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-23 00:17:31.689491 | orchestrator | 2026-03-23 00:17:31.689500 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-23 00:17:31.689513 | orchestrator | 2026-03-23 00:17:31.689523 | orchestrator | + echo 2026-03-23 00:17:31.689532 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-23 00:17:31.690519 | orchestrator | ++ export INTERACTIVE=false 2026-03-23 00:17:31.690536 | orchestrator | ++ INTERACTIVE=false 2026-03-23 00:17:31.690547 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-23 00:17:31.690557 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-23 00:17:31.690565 | orchestrator | + source /opt/manager-vars.sh 2026-03-23 00:17:31.690613 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-23 00:17:31.690623 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-23 00:17:31.690709 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-23 00:17:31.690732 | orchestrator | ++ CEPH_VERSION=reef 2026-03-23 00:17:31.690747 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-23 00:17:31.690764 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-23 00:17:31.690779 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-23 00:17:31.690789 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-23 00:17:31.690797 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-23 00:17:31.690814 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-23 00:17:31.690824 | orchestrator | ++ export ARA=false 2026-03-23 00:17:31.690832 | orchestrator | ++ ARA=false 2026-03-23 00:17:31.690842 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-23 00:17:31.690851 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-23 00:17:31.690859 | orchestrator | ++ export TEMPEST=true 2026-03-23 00:17:31.690868 | orchestrator | ++ TEMPEST=true 2026-03-23 00:17:31.690877 | orchestrator | ++ export IS_ZUUL=true 2026-03-23 00:17:31.690886 | orchestrator | ++ IS_ZUUL=true 2026-03-23 00:17:31.690894 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169 2026-03-23 00:17:31.690904 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169 2026-03-23 00:17:31.690912 | orchestrator | ++ export EXTERNAL_API=false 2026-03-23 00:17:31.690921 | orchestrator | ++ EXTERNAL_API=false 2026-03-23 00:17:31.690930 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-23 00:17:31.690939 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-23 00:17:31.690948 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-23 00:17:31.690956 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-23 00:17:31.690965 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-23 00:17:31.690978 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-23 00:17:31.691010 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-23 00:17:31.748475 | orchestrator | + docker version 2026-03-23 00:17:31.855696 | orchestrator | Client: Docker Engine - Community 2026-03-23 00:17:31.855799 | orchestrator | Version: 27.5.1 2026-03-23 00:17:31.855816 | orchestrator | API version: 1.47 2026-03-23 00:17:31.855831 | orchestrator | Go version: go1.22.11 2026-03-23 00:17:31.855842 | orchestrator | Git commit: 9f9e405 2026-03-23 00:17:31.855854 
| orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-03-23 00:17:31.855866 | orchestrator | OS/Arch: linux/amd64
2026-03-23 00:17:31.855877 | orchestrator | Context: default
2026-03-23 00:17:31.855888 | orchestrator |
2026-03-23 00:17:31.855900 | orchestrator | Server: Docker Engine - Community
2026-03-23 00:17:31.855911 | orchestrator | Engine:
2026-03-23 00:17:31.855922 | orchestrator | Version: 27.5.1
2026-03-23 00:17:31.855934 | orchestrator | API version: 1.47 (minimum version 1.24)
2026-03-23 00:17:31.855976 | orchestrator | Go version: go1.22.11
2026-03-23 00:17:31.856020 | orchestrator | Git commit: 4c9b3b0
2026-03-23 00:17:31.856032 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2026-03-23 00:17:31.856043 | orchestrator | OS/Arch: linux/amd64
2026-03-23 00:17:31.856054 | orchestrator | Experimental: false
2026-03-23 00:17:31.856065 | orchestrator | containerd:
2026-03-23 00:17:31.856076 | orchestrator | Version: v2.2.2
2026-03-23 00:17:31.856088 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9
2026-03-23 00:17:31.856100 | orchestrator | runc:
2026-03-23 00:17:31.856111 | orchestrator | Version: 1.3.4
2026-03-23 00:17:31.856138 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8
2026-03-23 00:17:31.856150 | orchestrator | docker-init:
2026-03-23 00:17:31.856161 | orchestrator | Version: 0.19.0
2026-03-23 00:17:31.856173 | orchestrator | GitCommit: de40ad0
2026-03-23 00:17:31.859083 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2026-03-23 00:17:31.866490 | orchestrator | + set -e
2026-03-23 00:17:31.866609 | orchestrator | + source /opt/manager-vars.sh
2026-03-23 00:17:31.866637 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-23 00:17:31.866661 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-23 00:17:31.866681 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-23 00:17:31.866719 | orchestrator | ++ CEPH_VERSION=reef
2026-03-23 00:17:31.866731 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-23 00:17:31.866744 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-23 00:17:31.866755 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-23 00:17:31.866767 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-23 00:17:31.866777 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-23 00:17:31.866788 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-23 00:17:31.866799 | orchestrator | ++ export ARA=false
2026-03-23 00:17:31.866811 | orchestrator | ++ ARA=false
2026-03-23 00:17:31.866822 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-23 00:17:31.866835 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-23 00:17:31.866846 | orchestrator | ++ export TEMPEST=true
2026-03-23 00:17:31.866857 | orchestrator | ++ TEMPEST=true
2026-03-23 00:17:31.866868 | orchestrator | ++ export IS_ZUUL=true
2026-03-23 00:17:31.866879 | orchestrator | ++ IS_ZUUL=true
2026-03-23 00:17:31.866890 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169
2026-03-23 00:17:31.866901 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169
2026-03-23 00:17:31.866912 | orchestrator | ++ export EXTERNAL_API=false
2026-03-23 00:17:31.866923 | orchestrator | ++ EXTERNAL_API=false
2026-03-23 00:17:31.866934 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-23 00:17:31.866944 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-23 00:17:31.866955 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-23 00:17:31.866966 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-23 00:17:31.866977 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-23 00:17:31.867011 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-23 00:17:31.867024 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-23 00:17:31.867035 | orchestrator | ++ export INTERACTIVE=false
2026-03-23 00:17:31.867045 | orchestrator | ++ INTERACTIVE=false
2026-03-23 00:17:31.867056 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-23 00:17:31.867072 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-23 00:17:31.867095 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-23 00:17:31.867106 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-23 00:17:31.867118 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-03-23 00:17:31.873098 | orchestrator | + set -e
2026-03-23 00:17:31.873166 | orchestrator | + VERSION=reef
2026-03-23 00:17:31.873971 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-23 00:17:31.879715 | orchestrator | + [[ -n ceph_version: reef ]]
2026-03-23 00:17:31.879759 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-03-23 00:17:31.885309 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2026-03-23 00:17:31.891130 | orchestrator | + set -e
2026-03-23 00:17:31.891528 | orchestrator | + VERSION=2024.2
2026-03-23 00:17:31.892034 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-23 00:17:31.895872 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-03-23 00:17:31.895920 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2026-03-23 00:17:31.900455 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-23 00:17:31.901241 | orchestrator | ++ semver latest 7.0.0
2026-03-23 00:17:31.960290 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-23 00:17:31.960351 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-23 00:17:31.960361 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-23 00:17:31.961300 | orchestrator | ++ semver latest 10.0.0-0
2026-03-23 00:17:32.018195 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-23 00:17:32.018643 | orchestrator | ++ semver 2024.2 2025.1
2026-03-23 00:17:32.073593 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-23 00:17:32.073687 | orchestrator | +
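The set-ceph-version.sh / set-openstack-version.sh trace above follows a simple grep-then-sed pattern: verify the key exists, then rewrite its value in place. A minimal standalone sketch of that pattern, using a temp file instead of the real configuration.yml:

```shell
# Sketch of the version-pinning pattern traced in the log. The temp file
# stands in for /opt/configuration/environments/manager/configuration.yml.
set -e
VERSION=2024.2
CONFIG=$(mktemp)
echo 'openstack_version: 2024.1' > "$CONFIG"

# Guard mirrors the [[ -n ... ]] check in the trace: only rewrite if the
# key is already present in the file.
if [ -n "$(grep '^openstack_version:' "$CONFIG")" ]; then
    sed -i "s/openstack_version: .*/openstack_version: $VERSION/g" "$CONFIG"
fi
```

Note that GNU sed's `-i` edits the file in place; BSD sed requires a (possibly empty) suffix argument after `-i`.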
/opt/configuration/scripts/enable-resource-nodes.sh 2026-03-23 00:17:32.165689 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-23 00:17:32.166724 | orchestrator | + source /opt/venv/bin/activate 2026-03-23 00:17:32.167959 | orchestrator | ++ deactivate nondestructive 2026-03-23 00:17:32.168016 | orchestrator | ++ '[' -n '' ']' 2026-03-23 00:17:32.168041 | orchestrator | ++ '[' -n '' ']' 2026-03-23 00:17:32.168054 | orchestrator | ++ hash -r 2026-03-23 00:17:32.168065 | orchestrator | ++ '[' -n '' ']' 2026-03-23 00:17:32.168076 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-23 00:17:32.168088 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-23 00:17:32.168112 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-23 00:17:32.168131 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-23 00:17:32.168142 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-23 00:17:32.168153 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-23 00:17:32.168164 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-23 00:17:32.168177 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-23 00:17:32.168213 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-23 00:17:32.168225 | orchestrator | ++ export PATH 2026-03-23 00:17:32.168240 | orchestrator | ++ '[' -n '' ']' 2026-03-23 00:17:32.168344 | orchestrator | ++ '[' -z '' ']' 2026-03-23 00:17:32.168361 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-23 00:17:32.168437 | orchestrator | ++ PS1='(venv) ' 2026-03-23 00:17:32.168452 | orchestrator | ++ export PS1 2026-03-23 00:17:32.168464 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-23 00:17:32.168476 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-23 00:17:32.168491 | orchestrator | ++ hash -r 2026-03-23 00:17:32.168518 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-23 00:17:33.301293 | orchestrator | 2026-03-23 00:17:33.301397 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-23 00:17:33.301413 | orchestrator | 2026-03-23 00:17:33.301425 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-23 00:17:33.851057 | orchestrator | ok: [testbed-manager] 2026-03-23 00:17:33.851149 | orchestrator | 2026-03-23 00:17:33.851163 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-23 00:17:34.794104 | orchestrator | changed: [testbed-manager] 2026-03-23 00:17:34.794199 | orchestrator | 2026-03-23 00:17:34.794215 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-23 00:17:34.794228 | orchestrator | 2026-03-23 00:17:34.794240 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-23 00:17:38.187311 | orchestrator | ok: [testbed-manager] 2026-03-23 00:17:38.187423 | orchestrator | 2026-03-23 00:17:38.187437 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-23 00:17:38.244446 | orchestrator | ok: [testbed-manager] 2026-03-23 00:17:38.244562 | orchestrator | 2026-03-23 00:17:38.244592 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-23 00:17:38.701043 | orchestrator | changed: [testbed-manager] 2026-03-23 00:17:38.701150 | orchestrator | 2026-03-23 00:17:38.701166 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-23 00:17:38.736486 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:17:38.736566 | orchestrator | 2026-03-23 00:17:38.736580 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-03-23 00:17:39.078820 | orchestrator | changed: [testbed-manager] 2026-03-23 00:17:39.078944 | orchestrator | 2026-03-23 00:17:39.078969 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-23 00:17:39.402667 | orchestrator | ok: [testbed-manager] 2026-03-23 00:17:39.402771 | orchestrator | 2026-03-23 00:17:39.402787 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-23 00:17:39.517569 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:17:39.517667 | orchestrator | 2026-03-23 00:17:39.517684 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-23 00:17:39.517695 | orchestrator | 2026-03-23 00:17:39.517706 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-23 00:17:41.260923 | orchestrator | ok: [testbed-manager] 2026-03-23 00:17:41.261049 | orchestrator | 2026-03-23 00:17:41.261069 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-23 00:17:41.355673 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-23 00:17:41.355775 | orchestrator | 2026-03-23 00:17:41.355794 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-23 00:17:41.409929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-23 00:17:41.410114 | orchestrator | 2026-03-23 00:17:41.410138 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-23 00:17:42.494496 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-23 00:17:42.494617 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2026-03-23 00:17:42.494635 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-23 00:17:42.494647 | orchestrator | 2026-03-23 00:17:42.494660 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-23 00:17:44.234180 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-23 00:17:44.234269 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-23 00:17:44.234283 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-23 00:17:44.234295 | orchestrator | 2026-03-23 00:17:44.234306 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-23 00:17:44.865755 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-23 00:17:44.865849 | orchestrator | changed: [testbed-manager] 2026-03-23 00:17:44.865866 | orchestrator | 2026-03-23 00:17:44.865880 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-23 00:17:45.472387 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-23 00:17:45.472473 | orchestrator | changed: [testbed-manager] 2026-03-23 00:17:45.472490 | orchestrator | 2026-03-23 00:17:45.472503 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-23 00:17:45.530673 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:17:45.530757 | orchestrator | 2026-03-23 00:17:45.530772 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-23 00:17:45.894200 | orchestrator | ok: [testbed-manager] 2026-03-23 00:17:45.894295 | orchestrator | 2026-03-23 00:17:45.894313 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-23 00:17:45.963801 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-23 00:17:45.963925 | orchestrator | 2026-03-23 00:17:45.963943 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-23 00:17:47.021163 | orchestrator | changed: [testbed-manager] 2026-03-23 00:17:47.021258 | orchestrator | 2026-03-23 00:17:47.021274 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-23 00:17:47.783924 | orchestrator | changed: [testbed-manager] 2026-03-23 00:17:47.784025 | orchestrator | 2026-03-23 00:17:47.784076 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-23 00:18:03.011478 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:03.011671 | orchestrator | 2026-03-23 00:18:03.011723 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-23 00:18:03.067215 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:18:03.067323 | orchestrator | 2026-03-23 00:18:03.067341 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-23 00:18:03.067354 | orchestrator | 2026-03-23 00:18:03.067366 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-23 00:18:04.875191 | orchestrator | ok: [testbed-manager] 2026-03-23 00:18:04.875286 | orchestrator | 2026-03-23 00:18:04.875326 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-23 00:18:04.977451 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-23 00:18:04.977549 | orchestrator | 2026-03-23 00:18:04.977564 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-23 00:18:05.035167 | orchestrator | included: 
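Both the traefik and manager roles run a "Create traefik external network" task, which reports `changed` on first run and `ok` afterwards. Outside Ansible, the same idempotent behaviour can be sketched with a guard around `docker network create` (the `ensure_network` helper is hypothetical, not part of OSISM):

```shell
# Create a Docker network only if it does not already exist, mirroring the
# changed/ok idempotency seen in the log.
ensure_network() {
    name="$1"
    if docker network inspect "$name" >/dev/null 2>&1; then
        echo "exists"        # later runs: nothing to do ("ok")
    else
        docker network create "$name" >/dev/null
        echo "created"       # first run: network was made ("changed")
    fi
}
```

The guard is needed because `docker network create` fails if a network with the same name already exists.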
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-23 00:18:05.035258 | orchestrator | 2026-03-23 00:18:05.035274 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-23 00:18:07.586969 | orchestrator | ok: [testbed-manager] 2026-03-23 00:18:07.587068 | orchestrator | 2026-03-23 00:18:07.587086 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-23 00:18:07.643960 | orchestrator | ok: [testbed-manager] 2026-03-23 00:18:07.644069 | orchestrator | 2026-03-23 00:18:07.644092 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-23 00:18:07.783822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-23 00:18:07.783915 | orchestrator | 2026-03-23 00:18:07.783934 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-23 00:18:10.545476 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-23 00:18:10.545567 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-23 00:18:10.545581 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-23 00:18:10.545593 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-23 00:18:10.545604 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-23 00:18:10.545615 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-23 00:18:10.545626 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-23 00:18:10.545638 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-23 00:18:10.545649 | orchestrator | 2026-03-23 00:18:10.545662 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-03-23 00:18:11.183575 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:11.183668 | orchestrator | 2026-03-23 00:18:11.183684 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-23 00:18:11.809990 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:11.810232 | orchestrator | 2026-03-23 00:18:11.810258 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-23 00:18:11.892426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-23 00:18:11.892526 | orchestrator | 2026-03-23 00:18:11.892541 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-23 00:18:13.115835 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-23 00:18:13.115956 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-23 00:18:13.115972 | orchestrator | 2026-03-23 00:18:13.115985 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-23 00:18:13.710979 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:13.711091 | orchestrator | 2026-03-23 00:18:13.711119 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-23 00:18:13.756497 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:18:13.756609 | orchestrator | 2026-03-23 00:18:13.756631 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-23 00:18:13.832036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-23 00:18:13.832123 | orchestrator | 2026-03-23 00:18:13.832183 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-03-23 00:18:14.457865 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:14.457970 | orchestrator | 2026-03-23 00:18:14.457988 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-23 00:18:14.511445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-23 00:18:14.511545 | orchestrator | 2026-03-23 00:18:14.511554 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-23 00:18:15.850806 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-23 00:18:15.850881 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-23 00:18:15.850895 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:15.850915 | orchestrator | 2026-03-23 00:18:15.850935 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-23 00:18:16.508951 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:16.509043 | orchestrator | 2026-03-23 00:18:16.509059 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-23 00:18:16.559353 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:18:16.559459 | orchestrator | 2026-03-23 00:18:16.559473 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-23 00:18:16.651572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-23 00:18:16.651687 | orchestrator | 2026-03-23 00:18:16.651713 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-23 00:18:17.182672 | orchestrator | changed: [testbed-manager] 2026-03-23 
00:18:17.182775 | orchestrator | 2026-03-23 00:18:17.182816 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-23 00:18:17.579607 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:17.579727 | orchestrator | 2026-03-23 00:18:17.579744 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-23 00:18:18.885696 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-23 00:18:18.885801 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-23 00:18:18.885817 | orchestrator | 2026-03-23 00:18:18.885831 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-23 00:18:19.536565 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:19.536666 | orchestrator | 2026-03-23 00:18:19.536683 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-23 00:18:19.902979 | orchestrator | ok: [testbed-manager] 2026-03-23 00:18:19.903104 | orchestrator | 2026-03-23 00:18:19.903129 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-23 00:18:20.261052 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:20.261186 | orchestrator | 2026-03-23 00:18:20.261206 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-23 00:18:20.303453 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:18:20.303540 | orchestrator | 2026-03-23 00:18:20.303554 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-23 00:18:20.365636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-23 00:18:20.365700 | orchestrator | 2026-03-23 00:18:20.365708 | orchestrator | TASK 
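The two sysctl tasks above raise the inotify limits that the manager's file watchers depend on. By hand this is a sysctl.d drop-in applied with `sysctl --system`; the sketch below writes the drop-in to a temp path so it can run unprivileged, and the limit values are assumptions, since the log does not show them:

```shell
# Write an inotify sysctl drop-in. In a real deployment this would live
# under /etc/sysctl.d/ and be applied with 'sysctl --system' (needs root);
# a temp file stands in here. The values are assumed, not from the log.
SYSCTL_DROPIN=$(mktemp)
cat > "$SYSCTL_DROPIN" <<'EOF'
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=512
EOF
```

To apply: `sudo sysctl -p "$SYSCTL_DROPIN"`, or install the file under /etc/sysctl.d/ and run `sudo sysctl --system`.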
[osism.services.manager : Include wrapper vars file] ********************** 2026-03-23 00:18:20.406694 | orchestrator | ok: [testbed-manager] 2026-03-23 00:18:20.406768 | orchestrator | 2026-03-23 00:18:20.406780 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-23 00:18:22.359516 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-23 00:18:22.359653 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-23 00:18:22.359679 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-23 00:18:22.359699 | orchestrator | 2026-03-23 00:18:22.359720 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-23 00:18:23.046988 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:23.047087 | orchestrator | 2026-03-23 00:18:23.047104 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-23 00:18:23.720598 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:23.720691 | orchestrator | 2026-03-23 00:18:23.720705 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-23 00:18:24.460427 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:24.460525 | orchestrator | 2026-03-23 00:18:24.460545 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-23 00:18:24.531923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-23 00:18:24.532018 | orchestrator | 2026-03-23 00:18:24.532034 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-23 00:18:24.577266 | orchestrator | ok: [testbed-manager] 2026-03-23 00:18:24.577359 | orchestrator | 2026-03-23 00:18:24.577375 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-03-23 00:18:25.316725 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-23 00:18:25.316820 | orchestrator | 2026-03-23 00:18:25.316831 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-23 00:18:25.413012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-23 00:18:25.413107 | orchestrator | 2026-03-23 00:18:25.413122 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-23 00:18:26.158989 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:26.159090 | orchestrator | 2026-03-23 00:18:26.159107 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-23 00:18:26.786686 | orchestrator | ok: [testbed-manager] 2026-03-23 00:18:26.786793 | orchestrator | 2026-03-23 00:18:26.786811 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-23 00:18:26.835680 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:18:26.835775 | orchestrator | 2026-03-23 00:18:26.835791 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-23 00:18:26.892827 | orchestrator | ok: [testbed-manager] 2026-03-23 00:18:26.892930 | orchestrator | 2026-03-23 00:18:26.892947 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-23 00:18:27.741465 | orchestrator | changed: [testbed-manager] 2026-03-23 00:18:27.741593 | orchestrator | 2026-03-23 00:18:27.741610 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-23 00:19:36.966802 | orchestrator | changed: [testbed-manager] 2026-03-23 00:19:36.966914 | orchestrator | 2026-03-23 
00:19:36.966931 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-23 00:19:37.824212 | orchestrator | ok: [testbed-manager] 2026-03-23 00:19:37.824333 | orchestrator | 2026-03-23 00:19:37.824419 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-23 00:19:37.877492 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:19:37.877582 | orchestrator | 2026-03-23 00:19:37.877597 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-23 00:19:40.756339 | orchestrator | changed: [testbed-manager] 2026-03-23 00:19:40.756517 | orchestrator | 2026-03-23 00:19:40.756538 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-23 00:19:40.828903 | orchestrator | ok: [testbed-manager] 2026-03-23 00:19:40.829005 | orchestrator | 2026-03-23 00:19:40.829045 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-23 00:19:40.829059 | orchestrator | 2026-03-23 00:19:40.829072 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-23 00:19:40.868465 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:19:40.868563 | orchestrator | 2026-03-23 00:19:40.868578 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-23 00:20:40.912781 | orchestrator | Pausing for 60 seconds 2026-03-23 00:20:40.912893 | orchestrator | changed: [testbed-manager] 2026-03-23 00:20:40.912911 | orchestrator | 2026-03-23 00:20:40.912925 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-23 00:20:44.005354 | orchestrator | changed: [testbed-manager] 2026-03-23 00:20:44.005481 | orchestrator | 2026-03-23 00:20:44.005524 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-03-23 00:21:25.473255 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-23 00:21:25.473392 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-23 00:21:25.473422 | orchestrator | changed: [testbed-manager] 2026-03-23 00:21:25.473483 | orchestrator | 2026-03-23 00:21:25.473508 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-23 00:21:31.162757 | orchestrator | changed: [testbed-manager] 2026-03-23 00:21:31.162862 | orchestrator | 2026-03-23 00:21:31.162879 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-23 00:21:31.256353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-23 00:21:31.256457 | orchestrator | 2026-03-23 00:21:31.256475 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-23 00:21:31.256489 | orchestrator | 2026-03-23 00:21:31.256501 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-23 00:21:31.316240 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:21:31.316314 | orchestrator | 2026-03-23 00:21:31.316328 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-23 00:21:31.383514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-23 00:21:31.383639 | orchestrator | 2026-03-23 00:21:31.383656 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-23 00:21:32.104220 | orchestrator | changed: [testbed-manager] 2026-03-23 00:21:32.104316 | 
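The handler above pauses for 60 seconds, then retries up to 50 times until the manager container reports healthy. The same wait can be sketched as a polling loop over Docker's health status (the function name, container name, and retry/delay defaults are assumptions):

```shell
# Poll a container's Docker health status until it is "healthy" or the
# retry budget is exhausted, mirroring the FAILED - RETRYING lines above.
wait_healthy() {
    container="$1"; retries="${2:-50}"; delay="${3:-5}"
    i=0
    while [ "$i" -lt "$retries" ]; do
        status=$(docker inspect --format '{{.State.Health.Status}}' "$container" 2>/dev/null)
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        sleep "$delay"
        i=$((i + 1))
    done
    return 1
}
```

`{{.State.Health.Status}}` is only populated for containers that define a HEALTHCHECK; for others the inspect template yields an empty or error result and the loop runs to exhaustion.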
orchestrator | 2026-03-23 00:21:32.104332 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-23 00:21:35.097940 | orchestrator | ok: [testbed-manager] 2026-03-23 00:21:35.098150 | orchestrator | 2026-03-23 00:21:35.098182 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-23 00:21:35.173803 | orchestrator | ok: [testbed-manager] => { 2026-03-23 00:21:35.173923 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-23 00:21:35.173941 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-23 00:21:35.173954 | orchestrator | "Checking running containers against expected versions...", 2026-03-23 00:21:35.173967 | orchestrator | "", 2026-03-23 00:21:35.173982 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-23 00:21:35.173994 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-23 00:21:35.174006 | orchestrator | " Enabled: true", 2026-03-23 00:21:35.174075 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-23 00:21:35.174089 | orchestrator | " Status: ✅ MATCH", 2026-03-23 00:21:35.174101 | orchestrator | "", 2026-03-23 00:21:35.174112 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-23 00:21:35.174124 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-23 00:21:35.174135 | orchestrator | " Enabled: true", 2026-03-23 00:21:35.174147 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-23 00:21:35.174158 | orchestrator | " Status: ✅ MATCH", 2026-03-23 00:21:35.174169 | orchestrator | "", 2026-03-23 00:21:35.174181 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-23 00:21:35.174192 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-23 
00:21:35.174203 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174217 | orchestrator | "  Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-23 00:21:35.174236 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.174254 | orchestrator | "",
2026-03-23 00:21:35.174270 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-23 00:21:35.174287 | orchestrator | "  Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-23 00:21:35.174305 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174326 | orchestrator | "  Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-23 00:21:35.174346 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.174363 | orchestrator | "",
2026-03-23 00:21:35.174375 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-23 00:21:35.174388 | orchestrator | "  Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-23 00:21:35.174426 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174439 | orchestrator | "  Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-23 00:21:35.174452 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.174463 | orchestrator | "",
2026-03-23 00:21:35.174474 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-23 00:21:35.174485 | orchestrator | "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.174496 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174507 | orchestrator | "  Running: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.174518 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.174529 | orchestrator | "",
2026-03-23 00:21:35.174540 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-23 00:21:35.174551 | orchestrator | "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-23 00:21:35.174562 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174573 | orchestrator | "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-23 00:21:35.174583 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.174594 | orchestrator | "",
2026-03-23 00:21:35.174645 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-23 00:21:35.174658 | orchestrator | "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-23 00:21:35.174669 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174680 | orchestrator | "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-23 00:21:35.174691 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.174702 | orchestrator | "",
2026-03-23 00:21:35.174721 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-23 00:21:35.174732 | orchestrator | "  Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-03-23 00:21:35.174748 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174759 | orchestrator | "  Running: registry.osism.tech/osism/osism-frontend:latest",
2026-03-23 00:21:35.174771 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.174782 | orchestrator | "",
2026-03-23 00:21:35.174793 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-23 00:21:35.174804 | orchestrator | "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-23 00:21:35.174815 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174826 | orchestrator | "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-23 00:21:35.174836 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.174847 | orchestrator | "",
2026-03-23 00:21:35.174858 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-23 00:21:35.174874 | orchestrator | "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.174893 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174912 | orchestrator | "  Running: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.174931 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.174950 | orchestrator | "",
2026-03-23 00:21:35.174965 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-23 00:21:35.174976 | orchestrator | "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.174987 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.174998 | orchestrator | "  Running: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.175009 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.175019 | orchestrator | "",
2026-03-23 00:21:35.175030 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-23 00:21:35.175041 | orchestrator | "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.175052 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.175062 | orchestrator | "  Running: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.175073 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.175084 | orchestrator | "",
2026-03-23 00:21:35.175095 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-23 00:21:35.175106 | orchestrator | "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.175116 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.175127 | orchestrator | "  Running: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.175149 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.175160 | orchestrator | "",
2026-03-23 00:21:35.175171 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-23 00:21:35.175204 | orchestrator | "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.175216 | orchestrator | "  Enabled: true",
2026-03-23 00:21:35.175227 | orchestrator | "  Running: registry.osism.tech/osism/osism:latest",
2026-03-23 00:21:35.175238 | orchestrator | "  Status: ✅ MATCH",
2026-03-23 00:21:35.175249 | orchestrator | "",
2026-03-23 00:21:35.175273 | orchestrator | "=== Summary ===", 2026-03-23
00:21:35.175284 | orchestrator | "Errors (version mismatches): 0",
2026-03-23 00:21:35.175295 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-23 00:21:35.175306 | orchestrator | "",
2026-03-23 00:21:35.175317 | orchestrator | "✅ All running containers match expected versions!"
2026-03-23 00:21:35.175328 | orchestrator | ]
2026-03-23 00:21:35.175339 | orchestrator | }
2026-03-23 00:21:35.175351 | orchestrator |
2026-03-23 00:21:35.175362 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-23 00:21:35.233135 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:21:35.233216 | orchestrator |
2026-03-23 00:21:35.233230 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:21:35.233244 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-23 00:21:35.233256 | orchestrator |
2026-03-23 00:21:35.306453 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-23 00:21:35.306557 | orchestrator | + deactivate
2026-03-23 00:21:35.306574 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-23 00:21:35.306591 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-23 00:21:35.306603 | orchestrator | + export PATH
2026-03-23 00:21:35.306677 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-23 00:21:35.306689 | orchestrator | + '[' -n '' ']'
2026-03-23 00:21:35.306700 | orchestrator | + hash -r
2026-03-23 00:21:35.306711 | orchestrator | + '[' -n '' ']'
2026-03-23 00:21:35.306723 | orchestrator | + unset VIRTUAL_ENV
2026-03-23 00:21:35.306734 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-23 00:21:35.306745 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-23 00:21:35.306756 | orchestrator | + unset -f deactivate
2026-03-23 00:21:35.306768 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-23 00:21:35.312014 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-23 00:21:35.312106 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-23 00:21:35.312122 | orchestrator | + local max_attempts=60
2026-03-23 00:21:35.312137 | orchestrator | + local name=ceph-ansible
2026-03-23 00:21:35.312150 | orchestrator | + local attempt_num=1
2026-03-23 00:21:35.313176 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:21:35.349171 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:21:35.349331 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-23 00:21:35.349353 | orchestrator | + local max_attempts=60
2026-03-23 00:21:35.349367 | orchestrator | + local name=kolla-ansible
2026-03-23 00:21:35.349379 | orchestrator | + local attempt_num=1
2026-03-23 00:21:35.349465 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-23 00:21:35.377539 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:21:35.377688 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-23 00:21:35.377706 | orchestrator | + local max_attempts=60
2026-03-23 00:21:35.377722 | orchestrator | + local name=osism-ansible
2026-03-23 00:21:35.377742 | orchestrator | + local attempt_num=1
2026-03-23 00:21:35.377954 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-23 00:21:35.411395 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:21:35.411465 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-23 00:21:35.411478 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-23 00:21:36.030779 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-23 00:21:36.198311 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-23 00:21:36.198436 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-03-23 00:21:36.198454 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-03-23 00:21:36.198467 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-03-23 00:21:36.198479 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2026-03-23 00:21:36.198491 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2026-03-23 00:21:36.198501 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2026-03-23 00:21:36.198512 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2026-03-23 00:21:36.198540 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2026-03-23 00:21:36.198552 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2026-03-23 00:21:36.198563 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2026-03-23 00:21:36.198574 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2026-03-23 00:21:36.198585 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2026-03-23 00:21:36.198595 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-03-23 00:21:36.198647 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2026-03-23 00:21:36.198661 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2026-03-23 00:21:36.202753 | orchestrator | ++ semver latest 7.0.0
2026-03-23 00:21:36.246469 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-23 00:21:36.246558 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-23 00:21:36.246576 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-23 00:21:36.250813 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-23 00:21:48.481063 | orchestrator | 2026-03-23 00:21:48 | INFO  | Prepare task for execution of resolvconf.
2026-03-23 00:21:48.691982 | orchestrator | 2026-03-23 00:21:48 | INFO  | Task 248e66fe-df0d-4ac7-8f8a-37bb0726534f (resolvconf) was prepared for execution.
2026-03-23 00:21:48.692105 | orchestrator | 2026-03-23 00:21:48 | INFO  | It takes a moment until task 248e66fe-df0d-4ac7-8f8a-37bb0726534f (resolvconf) has been started and output is visible here.
2026-03-23 00:22:00.841163 | orchestrator |
2026-03-23 00:22:00.841271 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-23 00:22:00.841289 | orchestrator |
2026-03-23 00:22:00.841301 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-23 00:22:00.841313 | orchestrator | Monday 23 March 2026 00:21:51 +0000 (0:00:00.174) 0:00:00.174 **********
2026-03-23 00:22:00.841325 | orchestrator | ok: [testbed-manager]
2026-03-23 00:22:00.841337 | orchestrator |
2026-03-23 00:22:00.841349 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-23 00:22:00.841361 | orchestrator | Monday 23 March 2026 00:21:55 +0000 (0:00:03.589) 0:00:03.764 **********
2026-03-23 00:22:00.841372 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:22:00.841383 | orchestrator |
2026-03-23 00:22:00.841394 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-23 00:22:00.841405 | orchestrator | Monday 23 March 2026 00:21:55 +0000 (0:00:00.052) 0:00:03.817 **********
2026-03-23 00:22:00.841417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-23 00:22:00.841429 | orchestrator |
2026-03-23 00:22:00.841441 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-23 00:22:00.841452 | orchestrator | Monday 23 March 2026 00:21:55 +0000 (0:00:00.074) 0:00:03.892 **********
2026-03-23 00:22:00.841474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-23 00:22:00.841485 | orchestrator |
2026-03-23 00:22:00.841497 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-23 00:22:00.841508 | orchestrator | Monday 23 March 2026 00:21:55 +0000 (0:00:00.081) 0:00:03.973 **********
2026-03-23 00:22:00.841519 | orchestrator | ok: [testbed-manager]
2026-03-23 00:22:00.841530 | orchestrator |
2026-03-23 00:22:00.841541 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-23 00:22:00.841552 | orchestrator | Monday 23 March 2026 00:21:56 +0000 (0:00:00.951) 0:00:04.924 **********
2026-03-23 00:22:00.841564 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:22:00.841575 | orchestrator |
2026-03-23 00:22:00.841586 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-23 00:22:00.841597 | orchestrator | Monday 23 March 2026 00:21:56 +0000 (0:00:00.059) 0:00:04.984 **********
2026-03-23 00:22:00.841613 | orchestrator | ok: [testbed-manager]
2026-03-23 00:22:00.841633 | orchestrator |
2026-03-23 00:22:00.841652 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-23 00:22:00.841753 | orchestrator | Monday 23 March 2026 00:21:57 +0000 (0:00:00.497) 0:00:05.481 **********
2026-03-23 00:22:00.841774 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:22:00.841793 | orchestrator |
2026-03-23 00:22:00.841812 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-23 00:22:00.841833 | orchestrator | Monday 23 March 2026 00:21:57 +0000 (0:00:00.528) 0:00:05.556 **********
2026-03-23 00:22:00.841854 | orchestrator | changed: [testbed-manager]
2026-03-23 00:22:00.841872 | orchestrator |
2026-03-23 00:22:00.841892 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-23 00:22:00.841907 | orchestrator | Monday 23 March 2026 00:21:57 +0000 (0:00:00.986) 0:00:06.084 **********
2026-03-23 00:22:00.841919 | orchestrator | changed: [testbed-manager]
2026-03-23 00:22:00.841930 | orchestrator |
2026-03-23 00:22:00.841941 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-23 00:22:00.841952 | orchestrator | Monday 23 March 2026 00:21:58 +0000 (0:00:00.986) 0:00:07.071 **********
2026-03-23 00:22:00.841963 | orchestrator | ok: [testbed-manager]
2026-03-23 00:22:00.841974 | orchestrator |
2026-03-23 00:22:00.842086 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-23 00:22:00.842115 | orchestrator | Monday 23 March 2026 00:21:59 +0000 (0:00:00.867) 0:00:07.938 **********
2026-03-23 00:22:00.842128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-23 00:22:00.842139 | orchestrator |
2026-03-23 00:22:00.842150 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-23 00:22:00.842161 | orchestrator | Monday 23 March 2026 00:21:59 +0000 (0:00:00.091) 0:00:08.029 **********
2026-03-23 00:22:00.842172 | orchestrator | changed: [testbed-manager]
2026-03-23 00:22:00.842183 | orchestrator |
2026-03-23 00:22:00.842194 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:22:00.842242 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-23 00:22:00.842254 | orchestrator |
2026-03-23 00:22:00.842265 | orchestrator |
2026-03-23 00:22:00.842276 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:22:00.842287 | orchestrator | Monday 23 March 2026 00:22:00 +0000 (0:00:01.050) 0:00:09.080 **********
2026-03-23 00:22:00.842299 | orchestrator | ===============================================================================
2026-03-23 00:22:00.842310 | orchestrator | Gathering Facts --------------------------------------------------------- 3.59s
2026-03-23 00:22:00.842321 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.05s
2026-03-23 00:22:00.842332 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.99s
2026-03-23 00:22:00.842343 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.95s
2026-03-23 00:22:00.842354 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.87s
2026-03-23 00:22:00.842365 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s
2026-03-23 00:22:00.842397 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s
2026-03-23 00:22:00.842409 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2026-03-23 00:22:00.842423 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2026-03-23 00:22:00.842442 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2026-03-23 00:22:00.842461 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s
2026-03-23 00:22:00.842479 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-03-23 00:22:00.842498 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2026-03-23 00:22:00.977479 | orchestrator | + osism apply sshconfig
2026-03-23 00:22:12.159769 | orchestrator | 2026-03-23 00:22:12 | INFO  | Prepare task for execution of sshconfig.
2026-03-23 00:22:12.232602 | orchestrator | 2026-03-23 00:22:12 | INFO  | Task 953aeb47-d019-49e4-b1e8-d68bc8fbd0b1 (sshconfig) was prepared for execution.
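The per-service version check logged earlier in this job (the repeated "Expected / Enabled / Running / Status: ✅ MATCH" blocks, plus the warning counter for expected containers that are not running) boils down to comparing a configured image reference against what Docker reports for the container. A hypothetical sketch, assuming Docker is available; `check_image` and its messages are illustrative names, not the real check task:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-service image check whose output appears
# earlier in the log. Compares the expected image reference against the image
# the named container is actually running.
check_image() {
    local name=$1 expected=$2
    local running
    # {{.Config.Image}} is the image reference the container was created from.
    if ! running=$(docker inspect -f '{{.Config.Image}}' "$name" 2>/dev/null); then
        echo "WARNING: expected container $name is not running"
        return 2
    fi
    if [[ "$running" == "$expected" ]]; then
        echo "  Status: MATCH"
    else
        echo "  Status: MISMATCH (running $running)"
        return 1
    fi
}

# Usage: check_image osismclient registry.osism.tech/osism/osism:latest
```

The two return codes mirror the summary's two counters: mismatches count as errors, missing containers as warnings.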
2026-03-23 00:22:12.232780 | orchestrator | 2026-03-23 00:22:12 | INFO  | It takes a moment until task 953aeb47-d019-49e4-b1e8-d68bc8fbd0b1 (sshconfig) has been started and output is visible here.
2026-03-23 00:22:23.319892 | orchestrator |
2026-03-23 00:22:23.320026 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-23 00:22:23.320046 | orchestrator |
2026-03-23 00:22:23.320059 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-23 00:22:23.320071 | orchestrator | Monday 23 March 2026 00:22:15 +0000 (0:00:00.188) 0:00:00.188 **********
2026-03-23 00:22:23.320083 | orchestrator | ok: [testbed-manager]
2026-03-23 00:22:23.320149 | orchestrator |
2026-03-23 00:22:23.320163 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-23 00:22:23.320175 | orchestrator | Monday 23 March 2026 00:22:16 +0000 (0:00:00.930) 0:00:01.119 **********
2026-03-23 00:22:23.320209 | orchestrator | changed: [testbed-manager]
2026-03-23 00:22:23.320222 | orchestrator |
2026-03-23 00:22:23.320233 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-23 00:22:23.320244 | orchestrator | Monday 23 March 2026 00:22:16 +0000 (0:00:00.551) 0:00:01.670 **********
2026-03-23 00:22:23.320270 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-23 00:22:23.320283 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-23 00:22:23.320296 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-23 00:22:23.320308 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-23 00:22:23.320320 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-23 00:22:23.320332 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-23 00:22:23.320343 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-23 00:22:23.320355 | orchestrator |
2026-03-23 00:22:23.320368 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-23 00:22:23.320380 | orchestrator | Monday 23 March 2026 00:22:22 +0000 (0:00:05.655) 0:00:07.326 **********
2026-03-23 00:22:23.320399 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:22:23.320423 | orchestrator |
2026-03-23 00:22:23.320450 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-23 00:22:23.320469 | orchestrator | Monday 23 March 2026 00:22:22 +0000 (0:00:00.116) 0:00:07.443 **********
2026-03-23 00:22:23.320488 | orchestrator | changed: [testbed-manager]
2026-03-23 00:22:23.320506 | orchestrator |
2026-03-23 00:22:23.320522 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:22:23.320544 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:22:23.320564 | orchestrator |
2026-03-23 00:22:23.320584 | orchestrator |
2026-03-23 00:22:23.320605 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:22:23.320625 | orchestrator | Monday 23 March 2026 00:22:23 +0000 (0:00:00.528) 0:00:07.971 **********
2026-03-23 00:22:23.320644 | orchestrator | ===============================================================================
2026-03-23 00:22:23.320665 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.66s
2026-03-23 00:22:23.320684 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.93s
2026-03-23 00:22:23.320730 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.55s
2026-03-23 00:22:23.320743 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.53s
2026-03-23 00:22:23.320754 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.12s
2026-03-23 00:22:23.476259 | orchestrator | + osism apply known-hosts
2026-03-23 00:22:34.770867 | orchestrator | 2026-03-23 00:22:34 | INFO  | Prepare task for execution of known-hosts.
2026-03-23 00:22:34.831211 | orchestrator | 2026-03-23 00:22:34 | INFO  | Task 44aeadce-bdf4-4e4a-a791-d06050325c04 (known-hosts) was prepared for execution.
2026-03-23 00:22:34.831302 | orchestrator | 2026-03-23 00:22:34 | INFO  | It takes a moment until task 44aeadce-bdf4-4e4a-a791-d06050325c04 (known-hosts) has been started and output is visible here.
2026-03-23 00:22:49.112673 | orchestrator |
2026-03-23 00:22:49.112877 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-23 00:22:49.112909 | orchestrator |
2026-03-23 00:22:49.112946 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-23 00:22:49.113878 | orchestrator | Monday 23 March 2026 00:22:37 +0000 (0:00:00.173) 0:00:00.173 **********
2026-03-23 00:22:49.113946 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-23 00:22:49.113956 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-23 00:22:49.113963 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-23 00:22:49.113993 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-23 00:22:49.114000 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-23 00:22:49.114007 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-23 00:22:49.114050 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-23 00:22:49.114058 | orchestrator |
2026-03-23 00:22:49.114065 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-23
00:22:49.114072 | orchestrator | Monday 23 March 2026 00:22:43 +0000 (0:00:06.170) 0:00:06.344 ********** 2026-03-23 00:22:49.114091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-23 00:22:49.114101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-23 00:22:49.114107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-23 00:22:49.114114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-23 00:22:49.114130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-23 00:22:49.114137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-23 00:22:49.114143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-23 00:22:49.114149 | orchestrator | 2026-03-23 00:22:49.114156 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:49.114163 | orchestrator | Monday 23 March 2026 00:22:43 +0000 (0:00:00.169) 0:00:06.513 ********** 2026-03-23 00:22:49.114172 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDD1yL6U5rhbLennWKqtfOkEU2Z8UrIx1XjI4cH6rsJo1B4f608hTaaz17YKo3kWL4egN2vl7FM94XN5GY89IAbxmOlxRg+zLfg1GfABkkSQa1+Mm5B0ZT4AzpTyQddqe+OhPPIUVPPNNeRqw0F6ALy2F6hQeWYeqzyQhBtk+HbfhG1IQjK2LeFphzIApocoa3VEBi+i4A5dRYWviZ8ptdxVPmZE5NX4v5zqq0vzMLquqclsNjqhybq0XbxMWj0jncl/mSrjBEHn7pxKbdP8PI9gnOYAvPEiMs0MG6IQwPP6r2HyoYo2Uebtvi6zMDPprTTOLQ7fa/EzabPoS2S9ekTN1oOJw5isrXi4UPuTV9mDi8BDor7sk3czLazpOUmhOjhmowB7jMKVCRg6hZtSwugeKR45Emj/L5NDv7muWKqCjMxyFgjqziXqdbhj/R3ay/iyy4fndVFKxeGySpzNQXl7YFTYJQ4x0PVCerpion+ADttCpiY5q1h6hXWjqYlEY8=) 2026-03-23 00:22:49.114182 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJS16vikmJBasEINpV1kyWaIdnnxVn/RGxoQRn7oa0dcXSwxv1gvqhqTBAYqYafq6ReR/2zm6P/cI7+e+i+4j68=) 2026-03-23 00:22:49.114191 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGrYzzSum0bMyidwAUVNLnojpAqWQSUDzlo2Jhjk4Fe/) 2026-03-23 00:22:49.114199 | orchestrator | 2026-03-23 00:22:49.114206 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:49.114212 | orchestrator | Monday 23 March 2026 00:22:45 +0000 (0:00:01.059) 0:00:07.573 ********** 2026-03-23 00:22:49.114242 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtqYYvz/qboEduwWDTsdQ6DcxNWzBPgVUFXav4x+JxSzn6AW1lkqlTbIp1mrjicOYfUermBHm2G9M3ou2bKnFcO3bLetgAgkbnCyp41CJRrd8Y3TQGkHayNEGICmz5ZW/1PmWr/SGBm9FTaTsAtt9rO7mCR2s7UHk3OvZ4piHR4ThjM3wytNeIpOuDwYYyOtE0Znh7px1wSAvx7dKq7yPpEmkPW6gp/B5cssoakdgwwm5gSEvUvNdK58wQhmLFcO6dOGm+vnCxKQBwXlKniy9cpnIGmqJ3I72OBBHFMvFUbzYAkSocExILcfx2xDC/xefSCj2Osy1NbJtQrh+GyOH/iHNMD2G0hxF6xDjhbzAQ39qecRmtubsIKQqspsYntdB1lYXbsWBbLCLgRUq9G6TnZMCzNdKPbDO+oO709QqS36TR6SHQ4itLmB9Il2PN7oBazwdiKdMnNw8H0auCP+oK1WEpmTCc1rkAhwYQXNx5tYV6DnKYggM3IXI77DpILo0=) 
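The known_hosts play runs `ssh-keyscan` against every host and then writes the collected `host keytype key` entries, as shown in the `changed:` items above. A minimal shell sketch of the scan step; `scan_hosts` is a hypothetical helper (the real role is Ansible, not shell), and the `-T 5` timeout is an assumption:

```shell
#!/usr/bin/env bash
# Sketch of the ssh-keyscan step performed by osism.commons.known_hosts.
# scan_hosts is a hypothetical name; the role itself implements this in Ansible.
scan_hosts() {
    local known_hosts_file=$1; shift
    local host
    for host in "$@"; do
        # -T 5: assumed per-host timeout so one unreachable node cannot stall the scan.
        # Each output line has the known_hosts format: "<host> <keytype> <base64-key>".
        ssh-keyscan -T 5 "$host" 2>/dev/null >> "$known_hosts_file"
    done
}

# Usage: scan_hosts ~/.ssh/known_hosts testbed-manager testbed-node-0
```

Pre-populating known_hosts this way lets later `osism apply` runs SSH to the nodes without interactive host-key prompts.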
2026-03-23 00:22:49.114256 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDau7bKA6J9f4Zqdz7g4jd6UgCDqapzN4d0GJW0pRDDq) 2026-03-23 00:22:49.114262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBtsc545Q1vo90VkKpPLQlIpwHVNiqENz4UkrQ+SgAVTyuIgyEi71m2L7BOK/v1cp2ibVOgXqx/L5g6nPD9SkRE=) 2026-03-23 00:22:49.114269 | orchestrator | 2026-03-23 00:22:49.114276 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:49.114282 | orchestrator | Monday 23 March 2026 00:22:46 +0000 (0:00:00.985) 0:00:08.558 ********** 2026-03-23 00:22:49.114288 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID0nDW1eiAXJAQrpHBdRHiM7qBZBBNzeg/dURQNm7UNq) 2026-03-23 00:22:49.114295 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDN21u1+LY/6TSCHH/t9x2i6SjKswY+z6nv0TKcWOlyx8Kp1mB/zAEPVXpgGOjTkkcr7kY5Su5YqLXEMF1xmcq5BR7qbpeSs6ICJ6D4UjCeQZ1hnjr4P/7jO1WJlWqxEofYTXvn0cTUrWaRxeamB2IowGzIjLU7pVp8dLljHblwdAjFysZ1kDbY0GwFmsHIC9EylPw7bhOjNVq4zlQsD5bscb7JdWqkNnm1xnY0RTxNpgxbOrXiGsnCrPaH2/KRdbYUyw/wq9f+F6UHkhVxOzvb+twug5fcvQ2FjRybu9bWJmGCXfmAi6te15l8UzO7VcZS/SdjZKyAMEOlPFjoT0mYtnXpF5GSO1aZXjPFL6muEaX197eC1SYMTnuoMg8taKXRGRvvciQHTAjmKn6l+cfVqoH6YusmvnKtUA48toQeGFhGBS3lr3BKyOOs5GQIr8dNITGRdcU9k2LNydk6o1MMtFf1tTGFHO26FBtfi6VCPXKM5YGSMj14dQGxfrVe0LM=) 2026-03-23 00:22:49.114350 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIw914pLh5D5awgb6agu2sfjAoJxplUc0dEexl3EAVqq9O35f2NCKDhTD0weqIQyc+1cM7VFlTCik0sc/2NmXjA=) 2026-03-23 00:22:49.114357 | orchestrator | 2026-03-23 00:22:49.114364 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:49.114370 
| orchestrator | Monday 23 March 2026 00:22:46 +0000 (0:00:00.903) 0:00:09.461 ********** 2026-03-23 00:22:49.114377 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdFh3U3t0nJOueQ42k3jzvjsf4yTzDbC9YQ1nnXAAc1cYGl7ot7PaqCOMLcJefoeTY2vGGvd6/bnDlHDv0m2mZ4XVTgYlVBI8mGcprTyOzP2SCxQijVeyAwhJR3dywTwPx7oypaW7shhc/ceAmXcDyWa5iBf5m8RjIrE/n6hYjjRmdlHaARhBngE9lizwCrbZWkXavmCcUECnWcuu3zqbCcqfJ2SooF+WR4i0pblcEM40/f1LgyrlRXhFAVkAl3YrsTxNcAfdAloL/Y2nFTOM1+A5yoqRY+Fm6Zzy/Yn8LNuj0ec8yQpbzs5kLwV5TSeAvcEalu7gzl5s5AZqBu6EDs4s54Xbp1Qm7k+cPPlXJegR3B1k0p1v4MJZ8r0k5wEwsZeLXXJPPkFAbk3aY3jqGlEa/1I1V/O5hvJoJGFolTFjN/eTsX9wqiHMH1KWQGRnG/pF+BKdZPylZom3oFcQl8Yvrr7J6sfFCzcPHA3BNtw5P4cU/7WmFicXmBtQP6zU=) 2026-03-23 00:22:49.114383 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKLvo/S287pGJWHFEdYoNSr2NfJCsJYYTQXF8ptDMezC5rfm67+pyK51uo1rCZfLftSzuyVgZYAgkLXPXbsaMV4=) 2026-03-23 00:22:49.114390 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDt8trvfHZQUz0el7/n9AdqHSVgOnGN7ohPeJe/Wg41N) 2026-03-23 00:22:49.114396 | orchestrator | 2026-03-23 00:22:49.114403 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:49.114409 | orchestrator | Monday 23 March 2026 00:22:47 +0000 (0:00:00.914) 0:00:10.376 ********** 2026-03-23 00:22:49.114415 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXmohAjIk3A55FneeVHMWIhpo5zX3szNN0rGCH97DXYiqK+xl5TxT19vW7zsGeDpmTZpbnRvomXgAB8LP9eSncs6fqloJKTK/wgvRU5Jdf9K40e+nbSrRDlyL2W8ZkBfwFU/GWM9DiUyieA5NOMX/tLo0EGTT+gUS9u9kK17QU0j7l3O54FNA56F/sTEOHbMtS1tg4IDeFKko7nBjSFZokcDALgsk5BAqPDoZtXh7RjoEN5N4fE/KwgRpjRo0uT/NA4cLx/EmcdI0bQEbvxO4moNC4R/v1jbGt7VMH377gsH1GKT8oSuZPqP3q9fCnP5ljzywheDc40Vc8cqnSbb3W42lrj3ad6PNbTph3fp4N8miVY2EWzhmHtaSyXEbSSXOy/ahu9g3V/+/JhOZgAPwoJwGsfXz9IwGbjFUiOOlDSYamQUWnIOAbTK9mFaOaoz1brf66yvnzZM9U0G/rE65GVaD2XPH+bp/BF9wK2cXPcVitSfg+lLc/d6cwiRm+Ouc=) 2026-03-23 00:22:49.114426 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ8aCVmOjEqXI2/dXL1sXMYg1vgr7VXIHtHc/Pjif39oboSIpUF5bu+mJuO03ecuTPZMmcE+3xiUMiENnZu4uQs=) 2026-03-23 00:22:49.114433 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH+ZTjeTaecVg1g1FhbfVBRR+5J1Ao8qZOKOFTHGGLa4) 2026-03-23 00:22:49.114439 | orchestrator | 2026-03-23 00:22:49.114445 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:49.114452 | orchestrator | Monday 23 March 2026 00:22:48 +0000 (0:00:00.952) 0:00:11.328 ********** 2026-03-23 00:22:49.114465 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCylz7JM+oNf0CQ4tU9A7Uc7Rwpn53x+jLTl6nZYOkGx4ehVCgd0PFSQYej5/gTO7WVQsIioBkwPNRSZuGV+5uw3cm/7wTrLb41OR8U4XjEXRn+pTuRGvUuOlI6+iIq8vX8ua7klzCXdWl05oN3gaC/8IbIj7092dv0qZCNH1NF+CEc0zfCSOST+L4xkvaeRtHilPvKx3px0/YXxXCXEjOHDGvKbgSFcn15hFsbpJk3uVqd2F8no5bVco9zepj8XO5tVW/UOJyqkpsh4r6QpS8/VCHAo9dknzoDxBtzEtj+xtuyouq4gDhI666DDhqV2S+DRphMQCeHOiHsXjv/GW+PTFHv01u4eT+5YgFWn9Vtfwedb+oXV+I2ldoo8yeiKzgGVMQXlGsgHHXKiw7qrdvpChYUGY7Pz8EbYCL3OIrQSAp4tu9xXRPFEZU/eyjIk7/rFAhVNKrkvM5Uw4XOFQRIJaQ/TC/to6BG47I+0Hp/ioS7xbA9V94bAQoWW3ODrbk=) 2026-03-23 00:22:59.686963 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOujIrIOUxWUD/emBIQn6fZqPxGZ9K/op6mt7u29xx01JOJv8KVHZZ5wABYeRRRRu1rH4Eld0Ux/6MdY7bq5bM=) 2026-03-23 00:22:59.687078 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIXyYcqJKH3v/Gu2Q1J4kf5sohnV9pbIna8aFUyiGzVa) 2026-03-23 00:22:59.687095 | orchestrator | 2026-03-23 00:22:59.687108 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:59.687122 | orchestrator | Monday 23 March 2026 00:22:49 +0000 (0:00:00.912) 0:00:12.240 ********** 2026-03-23 00:22:59.687136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDP3RUSb1/hwSS89OQqHq7OSC3G9SbaSfO7Pi+CK7wKNaKuRqSwyc1srwszWVA2SyVj6OywFTZph700fM2C9VolEDX9y5OIAIFENeQFIQq/iaqjD06e9C62IdGhHs6GVrxVEwC84RmMfkS1nY8EZ2SqHTQbKW5a7EgI8cIC3d0cnfN+MUcqWiDeyDZTPmBf/zIZlC6JJKUm39PV7x5BkDvHNtrYLwn7UhR9SM0tJYccQ63tN/Sv/bWIb31Qfq6CM5odrZrj7iJXCN9w6gS/vTwb1xX5l8fpFEJ8B1elBPxcWqgHYAisdddfiEzvl+RyFmXzmk39mIqyM5HjYI+dXuNdNxEMGmv/WKwK37tlnn/RqTUssNBild2JPREKbebbJHmoc5Z5g4sa6N/pyBKGjto/QRDcZ+SPWTzy5nDDKoDJhAlFrEWqjy06ohbehzAgsXSgYhXgYZCXyCcrVnof7zoo3PHfMzLMy0B4b5m/YGMrI5+rZuo5W/ZSbXHlNF+d3nk=) 2026-03-23 00:22:59.687150 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIvwMWB6ReYbLIeku7lLJEoDHuDAfXM8HjvmEuAVxjDY62vULSRJ+dpHyDf0kR1/MOMUd+AvzS4DKIV6Q1+cxM4=) 2026-03-23 00:22:59.687162 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJdDc6NnBpnZVYQWdSKho7yzE6oqVxcpdagPUWixnaMP) 2026-03-23 00:22:59.687173 | orchestrator | 2026-03-23 00:22:59.687185 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-23 00:22:59.687197 | orchestrator | Monday 23 March 2026 00:22:50 +0000 (0:00:00.940) 
0:00:13.181 ********** 2026-03-23 00:22:59.687208 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-23 00:22:59.687220 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-23 00:22:59.687231 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-23 00:22:59.687242 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-23 00:22:59.687253 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-23 00:22:59.687284 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-23 00:22:59.687295 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-23 00:22:59.687327 | orchestrator | 2026-03-23 00:22:59.687338 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-23 00:22:59.687350 | orchestrator | Monday 23 March 2026 00:22:55 +0000 (0:00:05.003) 0:00:18.184 ********** 2026-03-23 00:22:59.687362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-23 00:22:59.687375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-23 00:22:59.687387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-23 00:22:59.687398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-23 00:22:59.687408 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-23 00:22:59.687419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-23 00:22:59.687430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-23 00:22:59.687441 | orchestrator | 2026-03-23 00:22:59.687453 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:59.687464 | orchestrator | Monday 23 March 2026 00:22:55 +0000 (0:00:00.154) 0:00:18.339 ********** 2026-03-23 00:22:59.687475 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGrYzzSum0bMyidwAUVNLnojpAqWQSUDzlo2Jhjk4Fe/) 2026-03-23 00:22:59.687510 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDD1yL6U5rhbLennWKqtfOkEU2Z8UrIx1XjI4cH6rsJo1B4f608hTaaz17YKo3kWL4egN2vl7FM94XN5GY89IAbxmOlxRg+zLfg1GfABkkSQa1+Mm5B0ZT4AzpTyQddqe+OhPPIUVPPNNeRqw0F6ALy2F6hQeWYeqzyQhBtk+HbfhG1IQjK2LeFphzIApocoa3VEBi+i4A5dRYWviZ8ptdxVPmZE5NX4v5zqq0vzMLquqclsNjqhybq0XbxMWj0jncl/mSrjBEHn7pxKbdP8PI9gnOYAvPEiMs0MG6IQwPP6r2HyoYo2Uebtvi6zMDPprTTOLQ7fa/EzabPoS2S9ekTN1oOJw5isrXi4UPuTV9mDi8BDor7sk3czLazpOUmhOjhmowB7jMKVCRg6hZtSwugeKR45Emj/L5NDv7muWKqCjMxyFgjqziXqdbhj/R3ay/iyy4fndVFKxeGySpzNQXl7YFTYJQ4x0PVCerpion+ADttCpiY5q1h6hXWjqYlEY8=) 2026-03-23 00:22:59.687526 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJS16vikmJBasEINpV1kyWaIdnnxVn/RGxoQRn7oa0dcXSwxv1gvqhqTBAYqYafq6ReR/2zm6P/cI7+e+i+4j68=) 2026-03-23 
00:22:59.687539 | orchestrator | 2026-03-23 00:22:59.687551 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:59.687564 | orchestrator | Monday 23 March 2026 00:22:56 +0000 (0:00:01.007) 0:00:19.347 ********** 2026-03-23 00:22:59.687577 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDau7bKA6J9f4Zqdz7g4jd6UgCDqapzN4d0GJW0pRDDq) 2026-03-23 00:22:59.687592 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtqYYvz/qboEduwWDTsdQ6DcxNWzBPgVUFXav4x+JxSzn6AW1lkqlTbIp1mrjicOYfUermBHm2G9M3ou2bKnFcO3bLetgAgkbnCyp41CJRrd8Y3TQGkHayNEGICmz5ZW/1PmWr/SGBm9FTaTsAtt9rO7mCR2s7UHk3OvZ4piHR4ThjM3wytNeIpOuDwYYyOtE0Znh7px1wSAvx7dKq7yPpEmkPW6gp/B5cssoakdgwwm5gSEvUvNdK58wQhmLFcO6dOGm+vnCxKQBwXlKniy9cpnIGmqJ3I72OBBHFMvFUbzYAkSocExILcfx2xDC/xefSCj2Osy1NbJtQrh+GyOH/iHNMD2G0hxF6xDjhbzAQ39qecRmtubsIKQqspsYntdB1lYXbsWBbLCLgRUq9G6TnZMCzNdKPbDO+oO709QqS36TR6SHQ4itLmB9Il2PN7oBazwdiKdMnNw8H0auCP+oK1WEpmTCc1rkAhwYQXNx5tYV6DnKYggM3IXI77DpILo0=) 2026-03-23 00:22:59.687613 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBtsc545Q1vo90VkKpPLQlIpwHVNiqENz4UkrQ+SgAVTyuIgyEi71m2L7BOK/v1cp2ibVOgXqx/L5g6nPD9SkRE=) 2026-03-23 00:22:59.687626 | orchestrator | 2026-03-23 00:22:59.687639 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:59.687652 | orchestrator | Monday 23 March 2026 00:22:57 +0000 (0:00:01.024) 0:00:20.371 ********** 2026-03-23 00:22:59.687665 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDN21u1+LY/6TSCHH/t9x2i6SjKswY+z6nv0TKcWOlyx8Kp1mB/zAEPVXpgGOjTkkcr7kY5Su5YqLXEMF1xmcq5BR7qbpeSs6ICJ6D4UjCeQZ1hnjr4P/7jO1WJlWqxEofYTXvn0cTUrWaRxeamB2IowGzIjLU7pVp8dLljHblwdAjFysZ1kDbY0GwFmsHIC9EylPw7bhOjNVq4zlQsD5bscb7JdWqkNnm1xnY0RTxNpgxbOrXiGsnCrPaH2/KRdbYUyw/wq9f+F6UHkhVxOzvb+twug5fcvQ2FjRybu9bWJmGCXfmAi6te15l8UzO7VcZS/SdjZKyAMEOlPFjoT0mYtnXpF5GSO1aZXjPFL6muEaX197eC1SYMTnuoMg8taKXRGRvvciQHTAjmKn6l+cfVqoH6YusmvnKtUA48toQeGFhGBS3lr3BKyOOs5GQIr8dNITGRdcU9k2LNydk6o1MMtFf1tTGFHO26FBtfi6VCPXKM5YGSMj14dQGxfrVe0LM=) 2026-03-23 00:22:59.687679 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIw914pLh5D5awgb6agu2sfjAoJxplUc0dEexl3EAVqq9O35f2NCKDhTD0weqIQyc+1cM7VFlTCik0sc/2NmXjA=) 2026-03-23 00:22:59.687692 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID0nDW1eiAXJAQrpHBdRHiM7qBZBBNzeg/dURQNm7UNq) 2026-03-23 00:22:59.687705 | orchestrator | 2026-03-23 00:22:59.687717 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:22:59.687730 | orchestrator | Monday 23 March 2026 00:22:58 +0000 (0:00:00.915) 0:00:21.287 ********** 2026-03-23 00:22:59.687749 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdFh3U3t0nJOueQ42k3jzvjsf4yTzDbC9YQ1nnXAAc1cYGl7ot7PaqCOMLcJefoeTY2vGGvd6/bnDlHDv0m2mZ4XVTgYlVBI8mGcprTyOzP2SCxQijVeyAwhJR3dywTwPx7oypaW7shhc/ceAmXcDyWa5iBf5m8RjIrE/n6hYjjRmdlHaARhBngE9lizwCrbZWkXavmCcUECnWcuu3zqbCcqfJ2SooF+WR4i0pblcEM40/f1LgyrlRXhFAVkAl3YrsTxNcAfdAloL/Y2nFTOM1+A5yoqRY+Fm6Zzy/Yn8LNuj0ec8yQpbzs5kLwV5TSeAvcEalu7gzl5s5AZqBu6EDs4s54Xbp1Qm7k+cPPlXJegR3B1k0p1v4MJZ8r0k5wEwsZeLXXJPPkFAbk3aY3jqGlEa/1I1V/O5hvJoJGFolTFjN/eTsX9wqiHMH1KWQGRnG/pF+BKdZPylZom3oFcQl8Yvrr7J6sfFCzcPHA3BNtw5P4cU/7WmFicXmBtQP6zU=) 2026-03-23 00:22:59.687762 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKLvo/S287pGJWHFEdYoNSr2NfJCsJYYTQXF8ptDMezC5rfm67+pyK51uo1rCZfLftSzuyVgZYAgkLXPXbsaMV4=) 2026-03-23 00:22:59.687806 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDt8trvfHZQUz0el7/n9AdqHSVgOnGN7ohPeJe/Wg41N) 2026-03-23 00:23:03.727357 | orchestrator | 2026-03-23 00:23:03.727461 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:23:03.727478 | orchestrator | Monday 23 March 2026 00:22:59 +0000 (0:00:00.955) 0:00:22.242 ********** 2026-03-23 00:23:03.727491 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ8aCVmOjEqXI2/dXL1sXMYg1vgr7VXIHtHc/Pjif39oboSIpUF5bu+mJuO03ecuTPZMmcE+3xiUMiENnZu4uQs=) 2026-03-23 00:23:03.727503 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH+ZTjeTaecVg1g1FhbfVBRR+5J1Ao8qZOKOFTHGGLa4) 2026-03-23 00:23:03.727537 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXmohAjIk3A55FneeVHMWIhpo5zX3szNN0rGCH97DXYiqK+xl5TxT19vW7zsGeDpmTZpbnRvomXgAB8LP9eSncs6fqloJKTK/wgvRU5Jdf9K40e+nbSrRDlyL2W8ZkBfwFU/GWM9DiUyieA5NOMX/tLo0EGTT+gUS9u9kK17QU0j7l3O54FNA56F/sTEOHbMtS1tg4IDeFKko7nBjSFZokcDALgsk5BAqPDoZtXh7RjoEN5N4fE/KwgRpjRo0uT/NA4cLx/EmcdI0bQEbvxO4moNC4R/v1jbGt7VMH377gsH1GKT8oSuZPqP3q9fCnP5ljzywheDc40Vc8cqnSbb3W42lrj3ad6PNbTph3fp4N8miVY2EWzhmHtaSyXEbSSXOy/ahu9g3V/+/JhOZgAPwoJwGsfXz9IwGbjFUiOOlDSYamQUWnIOAbTK9mFaOaoz1brf66yvnzZM9U0G/rE65GVaD2XPH+bp/BF9wK2cXPcVitSfg+lLc/d6cwiRm+Ouc=) 2026-03-23 00:23:03.727575 | orchestrator | 2026-03-23 00:23:03.727587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:23:03.727598 | orchestrator | Monday 23 March 2026 00:23:00 +0000 (0:00:00.967) 0:00:23.210 
********** 2026-03-23 00:23:03.727609 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIXyYcqJKH3v/Gu2Q1J4kf5sohnV9pbIna8aFUyiGzVa) 2026-03-23 00:23:03.727621 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCylz7JM+oNf0CQ4tU9A7Uc7Rwpn53x+jLTl6nZYOkGx4ehVCgd0PFSQYej5/gTO7WVQsIioBkwPNRSZuGV+5uw3cm/7wTrLb41OR8U4XjEXRn+pTuRGvUuOlI6+iIq8vX8ua7klzCXdWl05oN3gaC/8IbIj7092dv0qZCNH1NF+CEc0zfCSOST+L4xkvaeRtHilPvKx3px0/YXxXCXEjOHDGvKbgSFcn15hFsbpJk3uVqd2F8no5bVco9zepj8XO5tVW/UOJyqkpsh4r6QpS8/VCHAo9dknzoDxBtzEtj+xtuyouq4gDhI666DDhqV2S+DRphMQCeHOiHsXjv/GW+PTFHv01u4eT+5YgFWn9Vtfwedb+oXV+I2ldoo8yeiKzgGVMQXlGsgHHXKiw7qrdvpChYUGY7Pz8EbYCL3OIrQSAp4tu9xXRPFEZU/eyjIk7/rFAhVNKrkvM5Uw4XOFQRIJaQ/TC/to6BG47I+0Hp/ioS7xbA9V94bAQoWW3ODrbk=) 2026-03-23 00:23:03.727633 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOujIrIOUxWUD/emBIQn6fZqPxGZ9K/op6mt7u29xx01JOJv8KVHZZ5wABYeRRRRu1rH4Eld0Ux/6MdY7bq5bM=) 2026-03-23 00:23:03.727644 | orchestrator | 2026-03-23 00:23:03.727655 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-23 00:23:03.727666 | orchestrator | Monday 23 March 2026 00:23:01 +0000 (0:00:01.012) 0:00:24.222 ********** 2026-03-23 00:23:03.727677 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDP3RUSb1/hwSS89OQqHq7OSC3G9SbaSfO7Pi+CK7wKNaKuRqSwyc1srwszWVA2SyVj6OywFTZph700fM2C9VolEDX9y5OIAIFENeQFIQq/iaqjD06e9C62IdGhHs6GVrxVEwC84RmMfkS1nY8EZ2SqHTQbKW5a7EgI8cIC3d0cnfN+MUcqWiDeyDZTPmBf/zIZlC6JJKUm39PV7x5BkDvHNtrYLwn7UhR9SM0tJYccQ63tN/Sv/bWIb31Qfq6CM5odrZrj7iJXCN9w6gS/vTwb1xX5l8fpFEJ8B1elBPxcWqgHYAisdddfiEzvl+RyFmXzmk39mIqyM5HjYI+dXuNdNxEMGmv/WKwK37tlnn/RqTUssNBild2JPREKbebbJHmoc5Z5g4sa6N/pyBKGjto/QRDcZ+SPWTzy5nDDKoDJhAlFrEWqjy06ohbehzAgsXSgYhXgYZCXyCcrVnof7zoo3PHfMzLMy0B4b5m/YGMrI5+rZuo5W/ZSbXHlNF+d3nk=) 2026-03-23 00:23:03.727689 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIvwMWB6ReYbLIeku7lLJEoDHuDAfXM8HjvmEuAVxjDY62vULSRJ+dpHyDf0kR1/MOMUd+AvzS4DKIV6Q1+cxM4=) 2026-03-23 00:23:03.727700 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJdDc6NnBpnZVYQWdSKho7yzE6oqVxcpdagPUWixnaMP) 2026-03-23 00:23:03.727712 | orchestrator | 2026-03-23 00:23:03.727723 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-23 00:23:03.727734 | orchestrator | Monday 23 March 2026 00:23:02 +0000 (0:00:01.015) 0:00:25.238 ********** 2026-03-23 00:23:03.727745 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-23 00:23:03.727757 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-23 00:23:03.727767 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-23 00:23:03.727839 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-23 00:23:03.727850 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-23 00:23:03.727861 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-23 00:23:03.727872 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-23 00:23:03.727883 | orchestrator | 
skipping: [testbed-manager] 2026-03-23 00:23:03.727894 | orchestrator | 2026-03-23 00:23:03.727924 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-23 00:23:03.727937 | orchestrator | Monday 23 March 2026 00:23:02 +0000 (0:00:00.197) 0:00:25.435 ********** 2026-03-23 00:23:03.727959 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:23:03.727972 | orchestrator | 2026-03-23 00:23:03.727985 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-23 00:23:03.727997 | orchestrator | Monday 23 March 2026 00:23:02 +0000 (0:00:00.051) 0:00:25.486 ********** 2026-03-23 00:23:03.728010 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:23:03.728022 | orchestrator | 2026-03-23 00:23:03.728036 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-23 00:23:03.728049 | orchestrator | Monday 23 March 2026 00:23:03 +0000 (0:00:00.058) 0:00:25.545 ********** 2026-03-23 00:23:03.728062 | orchestrator | changed: [testbed-manager] 2026-03-23 00:23:03.728074 | orchestrator | 2026-03-23 00:23:03.728087 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:23:03.728101 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-23 00:23:03.728116 | orchestrator | 2026-03-23 00:23:03.728129 | orchestrator | 2026-03-23 00:23:03.728141 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:23:03.728154 | orchestrator | Monday 23 March 2026 00:23:03 +0000 (0:00:00.499) 0:00:26.044 ********** 2026-03-23 00:23:03.728167 | orchestrator | =============================================================================== 2026-03-23 00:23:03.728179 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.17s 2026-03-23 
00:23:03.728192 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.00s 2026-03-23 00:23:03.728206 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-23 00:23:03.728218 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-23 00:23:03.728232 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-23 00:23:03.728245 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-23 00:23:03.728258 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-23 00:23:03.728269 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-23 00:23:03.728279 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-23 00:23:03.728290 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-03-23 00:23:03.728301 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-03-23 00:23:03.728320 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2026-03-23 00:23:03.728331 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2026-03-23 00:23:03.728342 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.91s 2026-03-23 00:23:03.728353 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.91s 2026-03-23 00:23:03.728364 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.90s 2026-03-23 00:23:03.728375 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s 2026-03-23 
00:23:03.728386 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2026-03-23 00:23:03.728397 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-03-23 00:23:03.728408 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-03-23 00:23:03.887860 | orchestrator | + osism apply squid 2026-03-23 00:23:15.111497 | orchestrator | 2026-03-23 00:23:15 | INFO  | Prepare task for execution of squid. 2026-03-23 00:23:15.185841 | orchestrator | 2026-03-23 00:23:15 | INFO  | Task 119aa3e9-54e8-4c38-b7e3-129f0979a7df (squid) was prepared for execution. 2026-03-23 00:23:15.185958 | orchestrator | 2026-03-23 00:23:15 | INFO  | It takes a moment until task 119aa3e9-54e8-4c38-b7e3-129f0979a7df (squid) has been started and output is visible here. 2026-03-23 00:25:07.755178 | orchestrator | 2026-03-23 00:25:07.755275 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-23 00:25:07.755292 | orchestrator | 2026-03-23 00:25:07.755308 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-23 00:25:07.755330 | orchestrator | Monday 23 March 2026 00:23:18 +0000 (0:00:00.179) 0:00:00.179 ********** 2026-03-23 00:25:07.755349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-23 00:25:07.755370 | orchestrator | 2026-03-23 00:25:07.755390 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-23 00:25:07.755409 | orchestrator | Monday 23 March 2026 00:23:18 +0000 (0:00:00.074) 0:00:00.254 ********** 2026-03-23 00:25:07.755429 | orchestrator | ok: [testbed-manager] 2026-03-23 00:25:07.755451 | orchestrator | 2026-03-23 00:25:07.755471 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-23 00:25:07.755489 | orchestrator | Monday 23 March 2026 00:23:20 +0000 (0:00:01.978) 0:00:02.232 ********** 2026-03-23 00:25:07.755510 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-23 00:25:07.755530 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-23 00:25:07.755550 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-23 00:25:07.755562 | orchestrator | 2026-03-23 00:25:07.755573 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-23 00:25:07.755585 | orchestrator | Monday 23 March 2026 00:23:21 +0000 (0:00:01.063) 0:00:03.295 ********** 2026-03-23 00:25:07.755596 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-23 00:25:07.755607 | orchestrator | 2026-03-23 00:25:07.755618 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-23 00:25:07.755630 | orchestrator | Monday 23 March 2026 00:23:22 +0000 (0:00:01.010) 0:00:04.306 ********** 2026-03-23 00:25:07.755640 | orchestrator | ok: [testbed-manager] 2026-03-23 00:25:07.755652 | orchestrator | 2026-03-23 00:25:07.755662 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-23 00:25:07.755690 | orchestrator | Monday 23 March 2026 00:23:22 +0000 (0:00:00.341) 0:00:04.647 ********** 2026-03-23 00:25:07.755702 | orchestrator | changed: [testbed-manager] 2026-03-23 00:25:07.755713 | orchestrator | 2026-03-23 00:25:07.755724 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-23 00:25:07.755738 | orchestrator | Monday 23 March 2026 00:23:23 +0000 (0:00:00.895) 0:00:05.543 ********** 2026-03-23 00:25:07.755751 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-03-23 00:25:07.755764 | orchestrator | ok: [testbed-manager] 2026-03-23 00:25:07.755776 | orchestrator | 2026-03-23 00:25:07.755789 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-23 00:25:07.755803 | orchestrator | Monday 23 March 2026 00:23:54 +0000 (0:00:31.022) 0:00:36.565 ********** 2026-03-23 00:25:07.755815 | orchestrator | changed: [testbed-manager] 2026-03-23 00:25:07.755828 | orchestrator | 2026-03-23 00:25:07.755841 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-23 00:25:07.755855 | orchestrator | Monday 23 March 2026 00:24:06 +0000 (0:00:12.287) 0:00:48.852 ********** 2026-03-23 00:25:07.755867 | orchestrator | Pausing for 60 seconds 2026-03-23 00:25:07.755880 | orchestrator | changed: [testbed-manager] 2026-03-23 00:25:07.755893 | orchestrator | 2026-03-23 00:25:07.755906 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-23 00:25:07.755919 | orchestrator | Monday 23 March 2026 00:25:06 +0000 (0:01:00.078) 0:01:48.930 ********** 2026-03-23 00:25:07.755932 | orchestrator | ok: [testbed-manager] 2026-03-23 00:25:07.755945 | orchestrator | 2026-03-23 00:25:07.756021 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-23 00:25:07.756057 | orchestrator | Monday 23 March 2026 00:25:07 +0000 (0:00:00.062) 0:01:48.993 ********** 2026-03-23 00:25:07.756069 | orchestrator | changed: [testbed-manager] 2026-03-23 00:25:07.756080 | orchestrator | 2026-03-23 00:25:07.756091 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:25:07.756102 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:25:07.756114 | orchestrator | 2026-03-23 00:25:07.756124 | orchestrator | 2026-03-23 00:25:07.756136 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:25:07.756147 | orchestrator | Monday 23 March 2026 00:25:07 +0000 (0:00:00.551) 0:01:49.544 ********** 2026-03-23 00:25:07.756158 | orchestrator | =============================================================================== 2026-03-23 00:25:07.756169 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-03-23 00:25:07.756180 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.02s 2026-03-23 00:25:07.756191 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.29s 2026-03-23 00:25:07.756202 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.98s 2026-03-23 00:25:07.756213 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.06s 2026-03-23 00:25:07.756223 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.01s 2026-03-23 00:25:07.756234 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.89s 2026-03-23 00:25:07.756245 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.55s 2026-03-23 00:25:07.756256 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-03-23 00:25:07.756267 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-03-23 00:25:07.756278 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-03-23 00:25:07.880715 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-23 00:25:07.880783 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-23 00:25:07.883868 | orchestrator | + set -e 2026-03-23 00:25:07.883888 | orchestrator | + NAMESPACE=kolla 
2026-03-23 00:25:07.883897 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-23 00:25:07.889444 | orchestrator | ++ semver latest 9.0.0 2026-03-23 00:25:07.930101 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-23 00:25:07.930187 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-23 00:25:07.930385 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-23 00:25:19.143362 | orchestrator | 2026-03-23 00:25:19 | INFO  | Prepare task for execution of operator. 2026-03-23 00:25:19.221591 | orchestrator | 2026-03-23 00:25:19 | INFO  | Task 617e1d49-2e96-424c-92de-d3119978c0ba (operator) was prepared for execution. 2026-03-23 00:25:19.221680 | orchestrator | 2026-03-23 00:25:19 | INFO  | It takes a moment until task 617e1d49-2e96-424c-92de-d3119978c0ba (operator) has been started and output is visible here. 2026-03-23 00:25:35.379691 | orchestrator | 2026-03-23 00:25:35.379768 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-23 00:25:35.379778 | orchestrator | 2026-03-23 00:25:35.379785 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-23 00:25:35.379792 | orchestrator | Monday 23 March 2026 00:25:22 +0000 (0:00:00.167) 0:00:00.167 ********** 2026-03-23 00:25:35.379798 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:25:35.379807 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:25:35.379813 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:25:35.379820 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:25:35.379826 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:25:35.379833 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:25:35.379842 | orchestrator | 2026-03-23 00:25:35.379849 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-23 00:25:35.379874 | orchestrator | Monday 23 March 2026 
00:25:25 +0000 (0:00:03.306) 0:00:03.474 **********
2026-03-23 00:25:35.379879 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:25:35.379882 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:25:35.379886 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:25:35.379890 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:25:35.379894 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:25:35.379898 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:25:35.379902 | orchestrator |
2026-03-23 00:25:35.379906 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-23 00:25:35.379910 | orchestrator |
2026-03-23 00:25:35.379914 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-23 00:25:35.379918 | orchestrator | Monday 23 March 2026 00:25:27 +0000 (0:00:01.758) 0:00:05.233 **********
2026-03-23 00:25:35.379922 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:25:35.379926 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:25:35.379929 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:25:35.379933 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:25:35.379937 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:25:35.379941 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:25:35.379944 | orchestrator |
2026-03-23 00:25:35.379948 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-23 00:25:35.379964 | orchestrator | Monday 23 March 2026 00:25:27 +0000 (0:00:00.135) 0:00:05.368 **********
2026-03-23 00:25:35.379988 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:25:35.379992 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:25:35.379996 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:25:35.380000 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:25:35.380004 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:25:35.380008 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:25:35.380011 | orchestrator |
2026-03-23 00:25:35.380015 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-23 00:25:35.380019 | orchestrator | Monday 23 March 2026 00:25:27 +0000 (0:00:00.146) 0:00:05.515 **********
2026-03-23 00:25:35.380023 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:25:35.380028 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:25:35.380031 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:25:35.380035 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:25:35.380039 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:25:35.380043 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:25:35.380047 | orchestrator |
2026-03-23 00:25:35.380051 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-23 00:25:35.380054 | orchestrator | Monday 23 March 2026 00:25:28 +0000 (0:00:00.793) 0:00:06.308 **********
2026-03-23 00:25:35.380058 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:25:35.380062 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:25:35.380066 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:25:35.380070 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:25:35.380073 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:25:35.380077 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:25:35.380081 | orchestrator |
2026-03-23 00:25:35.380085 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-23 00:25:35.380089 | orchestrator | Monday 23 March 2026 00:25:29 +0000 (0:00:01.040) 0:00:07.349 **********
2026-03-23 00:25:35.380093 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-23 00:25:35.380097 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-23 00:25:35.380101 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-23 00:25:35.380105 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-23 00:25:35.380109 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-23 00:25:35.380112 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-23 00:25:35.380116 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-23 00:25:35.380120 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-23 00:25:35.380124 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-23 00:25:35.380131 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-23 00:25:35.380135 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-23 00:25:35.380139 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-23 00:25:35.380143 | orchestrator |
2026-03-23 00:25:35.380147 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-23 00:25:35.380151 | orchestrator | Monday 23 March 2026 00:25:30 +0000 (0:00:01.219) 0:00:08.568 **********
2026-03-23 00:25:35.380154 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:25:35.380158 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:25:35.380162 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:25:35.380166 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:25:35.380169 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:25:35.380173 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:25:35.380177 | orchestrator |
2026-03-23 00:25:35.380181 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-23 00:25:35.380186 | orchestrator | Monday 23 March 2026 00:25:32 +0000 (0:00:01.445) 0:00:10.014 **********
2026-03-23 00:25:35.380190 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-23 00:25:35.380194 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-23 00:25:35.380198 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-23 00:25:35.380202 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-23 00:25:35.380206 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-23 00:25:35.380221 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-23 00:25:35.380225 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-23 00:25:35.380229 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-23 00:25:35.380233 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-23 00:25:35.380236 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-23 00:25:35.380240 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-23 00:25:35.380244 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-23 00:25:35.380248 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-23 00:25:35.380252 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-23 00:25:35.380255 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-23 00:25:35.380263 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-23 00:25:35.380267 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-23 00:25:35.380272 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-23 00:25:35.380276 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-23 00:25:35.380280 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-23 00:25:35.380285 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-23 00:25:35.380289 | orchestrator |
2026-03-23 00:25:35.380293 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-23 00:25:35.380298 | orchestrator | Monday 23 March 2026 00:25:33 +0000 (0:00:01.280) 0:00:11.294 **********
2026-03-23 00:25:35.380303 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:25:35.380307 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:25:35.380312 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:25:35.380316 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:25:35.380320 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:25:35.380324 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:25:35.380328 | orchestrator |
2026-03-23 00:25:35.380333 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-23 00:25:35.380341 | orchestrator | Monday 23 March 2026 00:25:33 +0000 (0:00:00.145) 0:00:11.440 **********
2026-03-23 00:25:35.380345 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:25:35.380349 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:25:35.380353 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:25:35.380357 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:25:35.380361 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:25:35.380365 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:25:35.380370 | orchestrator |
2026-03-23 00:25:35.380374 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-23 00:25:35.380378 | orchestrator | Monday 23 March 2026 00:25:33 +0000 (0:00:00.165) 0:00:11.605 **********
2026-03-23 00:25:35.380382 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:25:35.380387 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:25:35.380391 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:25:35.380395 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:25:35.380399 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:25:35.380404 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:25:35.380408 | orchestrator |
2026-03-23 00:25:35.380412 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-23 00:25:35.380416 | orchestrator | Monday 23 March 2026 00:25:34 +0000 (0:00:00.572) 0:00:12.178 **********
2026-03-23 00:25:35.380420 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:25:35.380425 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:25:35.380429 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:25:35.380433 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:25:35.380437 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:25:35.380441 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:25:35.380446 | orchestrator |
2026-03-23 00:25:35.380450 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-23 00:25:35.380454 | orchestrator | Monday 23 March 2026 00:25:34 +0000 (0:00:00.161) 0:00:12.339 **********
2026-03-23 00:25:35.380459 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-23 00:25:35.380463 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-23 00:25:35.380467 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:25:35.380471 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:25:35.380476 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-23 00:25:35.380480 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:25:35.380484 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-23 00:25:35.380489 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:25:35.380493 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-23 00:25:35.380497 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:25:35.380501 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-23 00:25:35.380506 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:25:35.380510 | orchestrator |
2026-03-23 00:25:35.380514 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-23 00:25:35.380519 | orchestrator | Monday 23 March 2026 00:25:35 +0000 (0:00:00.756) 0:00:13.095 **********
2026-03-23 00:25:35.380523 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:25:35.380527 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:25:35.380531 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:25:35.380535 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:25:35.380539 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:25:35.380544 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:25:35.380548 | orchestrator |
2026-03-23 00:25:35.380552 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-23 00:25:35.380556 | orchestrator | Monday 23 March 2026 00:25:35 +0000 (0:00:00.135) 0:00:13.231 **********
2026-03-23 00:25:35.380560 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:25:35.380565 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:25:35.380569 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:25:35.380573 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:25:35.380584 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:25:36.569862 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:25:36.569963 | orchestrator |
2026-03-23 00:25:36.570111 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-23 00:25:36.570127 | orchestrator | Monday 23 March 2026 00:25:35 +0000 (0:00:00.127) 0:00:13.358 **********
2026-03-23 00:25:36.570138 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:25:36.570149 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:25:36.570196 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:25:36.570208 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:25:36.570219 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:25:36.570230 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:25:36.570241 | orchestrator |
2026-03-23 00:25:36.570252 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-23 00:25:36.570263 | orchestrator | Monday 23 March 2026 00:25:35 +0000 (0:00:00.130) 0:00:13.488 **********
2026-03-23 00:25:36.570274 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:25:36.570285 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:25:36.570296 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:25:36.570307 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:25:36.570317 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:25:36.570328 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:25:36.570339 | orchestrator |
2026-03-23 00:25:36.570350 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-23 00:25:36.570361 | orchestrator | Monday 23 March 2026 00:25:36 +0000 (0:00:00.647) 0:00:14.136 **********
2026-03-23 00:25:36.570372 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:25:36.570383 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:25:36.570393 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:25:36.570404 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:25:36.570417 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:25:36.570429 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:25:36.570442 | orchestrator |
2026-03-23 00:25:36.570454 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:25:36.570491 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-23 00:25:36.570505 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-23 00:25:36.570522 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-23 00:25:36.570542 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-23 00:25:36.570561 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-23 00:25:36.570579 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-23 00:25:36.570599 | orchestrator |
2026-03-23 00:25:36.570612 | orchestrator |
2026-03-23 00:25:36.570623 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:25:36.570634 | orchestrator | Monday 23 March 2026 00:25:36 +0000 (0:00:00.223) 0:00:14.359 **********
2026-03-23 00:25:36.570645 | orchestrator | ===============================================================================
2026-03-23 00:25:36.570656 | orchestrator | Gathering Facts --------------------------------------------------------- 3.31s
2026-03-23 00:25:36.570667 | orchestrator | Do not require tty for all users ---------------------------------------- 1.76s
2026-03-23 00:25:36.570678 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.45s
2026-03-23 00:25:36.570716 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s
2026-03-23 00:25:36.570728 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s
2026-03-23 00:25:36.570739 | orchestrator | osism.commons.operator : Create user ------------------------------------ 1.04s
2026-03-23 00:25:36.570750 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.79s
2026-03-23 00:25:36.570761 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.76s
2026-03-23 00:25:36.570772 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2026-03-23 00:25:36.570782 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s
2026-03-23 00:25:36.570793 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2026-03-23 00:25:36.570804 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-03-23 00:25:36.570815 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-03-23 00:25:36.570826 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-03-23 00:25:36.570837 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2026-03-23 00:25:36.570848 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2026-03-23 00:25:36.570858 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2026-03-23 00:25:36.570869 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
2026-03-23 00:25:36.570880 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2026-03-23 00:25:36.760907 | orchestrator | + osism apply --environment custom facts
2026-03-23 00:25:37.961266 | orchestrator | 2026-03-23 00:25:37 | INFO  | Trying to run play facts in environment custom
2026-03-23 00:25:48.042833 | orchestrator | 2026-03-23 00:25:48 | INFO  | Prepare task for execution of facts.
2026-03-23 00:25:48.117531 | orchestrator | 2026-03-23 00:25:48 | INFO  | Task 0a5ca28e-6323-46c1-a726-27fec30c57f5 (facts) was prepared for execution.
2026-03-23 00:25:48.117635 | orchestrator | 2026-03-23 00:25:48 | INFO  | It takes a moment until task 0a5ca28e-6323-46c1-a726-27fec30c57f5 (facts) has been started and output is visible here.
2026-03-23 00:26:33.070627 | orchestrator |
2026-03-23 00:26:33.070769 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-23 00:26:33.070797 | orchestrator |
2026-03-23 00:26:33.070817 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-23 00:26:33.070857 | orchestrator | Monday 23 March 2026 00:25:51 +0000 (0:00:00.116) 0:00:00.116 **********
2026-03-23 00:26:33.070878 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:26:33.070898 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:26:33.070915 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:26:33.070932 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:26:33.070948 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:26:33.070964 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:26:33.070981 | orchestrator | ok: [testbed-manager]
2026-03-23 00:26:33.070999 | orchestrator |
2026-03-23 00:26:33.071112 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-23 00:26:33.071154 | orchestrator | Monday 23 March 2026 00:25:52 +0000 (0:00:01.414) 0:00:01.531 **********
2026-03-23 00:26:33.071175 | orchestrator | ok: [testbed-manager]
2026-03-23 00:26:33.071193 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:26:33.071212 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:26:33.071230 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:26:33.071250 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:26:33.071285 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:26:33.071304 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:26:33.071321 | orchestrator |
2026-03-23 00:26:33.071368 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-23 00:26:33.071386 | orchestrator |
2026-03-23 00:26:33.071403 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-23 00:26:33.071420 | orchestrator | Monday 23 March 2026 00:25:53 +0000 (0:00:01.256) 0:00:02.787 **********
2026-03-23 00:26:33.071436 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:33.071453 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:33.071469 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:33.071486 | orchestrator |
2026-03-23 00:26:33.071502 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-23 00:26:33.071521 | orchestrator | Monday 23 March 2026 00:25:53 +0000 (0:00:00.200) 0:00:02.881 **********
2026-03-23 00:26:33.071538 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:33.071554 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:33.071570 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:33.071586 | orchestrator |
2026-03-23 00:26:33.071602 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-23 00:26:33.071619 | orchestrator | Monday 23 March 2026 00:25:54 +0000 (0:00:00.205) 0:00:03.082 **********
2026-03-23 00:26:33.071635 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:33.071652 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:33.071668 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:33.071684 | orchestrator |
2026-03-23 00:26:33.071701 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-23 00:26:33.071718 | orchestrator | Monday 23 March 2026 00:25:54 +0000 (0:00:00.113) 0:00:03.287 **********
2026-03-23 00:26:33.071736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:26:33.071754 | orchestrator |
2026-03-23 00:26:33.071771 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-23 00:26:33.071788 | orchestrator | Monday 23 March 2026 00:25:54 +0000 (0:00:00.448) 0:00:03.401 **********
2026-03-23 00:26:33.071805 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:33.071823 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:33.071839 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:33.071855 | orchestrator |
2026-03-23 00:26:33.071872 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-23 00:26:33.071890 | orchestrator | Monday 23 March 2026 00:25:54 +0000 (0:00:00.110) 0:00:03.849 **********
2026-03-23 00:26:33.071907 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:26:33.071925 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:26:33.071962 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:26:33.071981 | orchestrator |
2026-03-23 00:26:33.072000 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-23 00:26:33.072046 | orchestrator | Monday 23 March 2026 00:25:54 +0000 (0:00:00.110) 0:00:03.960 **********
2026-03-23 00:26:33.072064 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:26:33.072082 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:26:33.072100 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:26:33.072119 | orchestrator |
2026-03-23 00:26:33.072137 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-23 00:26:33.072155 | orchestrator | Monday 23 March 2026 00:25:56 +0000 (0:00:01.120) 0:00:05.080 **********
2026-03-23 00:26:33.072171 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:33.072187 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:33.072203 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:33.072221 | orchestrator |
2026-03-23 00:26:33.072237 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-23 00:26:33.072255 | orchestrator | Monday 23 March 2026 00:25:56 +0000 (0:00:00.467) 0:00:05.548 **********
2026-03-23 00:26:33.072272 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:26:33.072288 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:26:33.072305 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:26:33.072339 | orchestrator |
2026-03-23 00:26:33.072373 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-23 00:26:33.072390 | orchestrator | Monday 23 March 2026 00:25:57 +0000 (0:00:01.067) 0:00:06.616 **********
2026-03-23 00:26:33.072406 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:26:33.072423 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:26:33.072440 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:26:33.072457 | orchestrator |
2026-03-23 00:26:33.072475 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-23 00:26:33.072490 | orchestrator | Monday 23 March 2026 00:26:14 +0000 (0:00:17.177) 0:00:23.794 **********
2026-03-23 00:26:33.072508 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:26:33.072525 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:26:33.072542 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:26:33.072559 | orchestrator |
2026-03-23 00:26:33.072576 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-23 00:26:33.072621 | orchestrator | Monday 23 March 2026 00:26:14 +0000 (0:00:00.092) 0:00:23.886 **********
2026-03-23 00:26:33.072637 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:26:33.072654 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:26:33.072668 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:26:33.072682 | orchestrator |
2026-03-23 00:26:33.072698 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-23 00:26:33.072714 | orchestrator | Monday 23 March 2026 00:26:23 +0000 (0:00:08.845) 0:00:32.732 **********
2026-03-23 00:26:33.072729 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:33.072744 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:33.072759 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:33.072774 | orchestrator |
2026-03-23 00:26:33.072789 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-23 00:26:33.072804 | orchestrator | Monday 23 March 2026 00:26:24 +0000 (0:00:00.479) 0:00:33.212 **********
2026-03-23 00:26:33.072820 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-23 00:26:33.072836 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-23 00:26:33.072850 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-23 00:26:33.072865 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-23 00:26:33.072880 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-23 00:26:33.072895 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-23 00:26:33.072910 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-23 00:26:33.072924 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-23 00:26:33.072939 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-23 00:26:33.072954 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-23 00:26:33.072969 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-23 00:26:33.072983 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-23 00:26:33.072997 | orchestrator |
2026-03-23 00:26:33.073034 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-23 00:26:33.073050 | orchestrator | Monday 23 March 2026 00:26:28 +0000 (0:00:03.779) 0:00:36.992 **********
2026-03-23 00:26:33.073065 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:33.073080 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:33.073095 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:33.073110 | orchestrator |
2026-03-23 00:26:33.073125 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-23 00:26:33.073140 | orchestrator |
2026-03-23 00:26:33.073155 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-23 00:26:33.073225 | orchestrator | Monday 23 March 2026 00:26:29 +0000 (0:00:01.307) 0:00:38.299 **********
2026-03-23 00:26:33.073257 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:26:33.073285 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:26:33.073301 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:26:33.073316 | orchestrator | ok: [testbed-manager]
2026-03-23 00:26:33.073331 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:33.073346 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:33.073360 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:33.073375 | orchestrator |
2026-03-23 00:26:33.073391 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:26:33.073408 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:26:33.073424 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:26:33.073441 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:26:33.073456 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:26:33.073471 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:26:33.073486 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:26:33.073570 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:26:33.073588 | orchestrator |
2026-03-23 00:26:33.073604 | orchestrator |
2026-03-23 00:26:33.073620 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:26:33.073637 | orchestrator | Monday 23 March 2026 00:26:33 +0000 (0:00:03.721) 0:00:42.020 **********
2026-03-23 00:26:33.073652 | orchestrator | ===============================================================================
2026-03-23 00:26:33.073668 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.18s
2026-03-23 00:26:33.073684 | orchestrator | Install required packages (Debian) -------------------------------------- 8.85s
2026-03-23 00:26:33.073699 | orchestrator | Copy fact files --------------------------------------------------------- 3.78s
2026-03-23 00:26:33.073715 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.72s
2026-03-23 00:26:33.073731 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-03-23 00:26:33.073747 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.31s
2026-03-23 00:26:33.073778 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s
2026-03-23 00:26:33.240960 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.12s
2026-03-23 00:26:33.241130 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2026-03-23 00:26:33.241149 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-03-23 00:26:33.241162 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-03-23 00:26:33.241173 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-03-23 00:26:33.241184 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2026-03-23 00:26:33.241195 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-03-23 00:26:33.241206 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s
2026-03-23 00:26:33.241218 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2026-03-23 00:26:33.241229 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-03-23 00:26:33.241239 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-03-23 00:26:33.404926 | orchestrator | + osism apply bootstrap
2026-03-23 00:26:44.712850 | orchestrator | 2026-03-23 00:26:44 | INFO  | Prepare task for execution of bootstrap.
2026-03-23 00:26:44.789116 | orchestrator | 2026-03-23 00:26:44 | INFO  | Task 9183164a-8a6d-4a6e-b769-6cf6ceb987f6 (bootstrap) was prepared for execution.
2026-03-23 00:26:44.789218 | orchestrator | 2026-03-23 00:26:44 | INFO  | It takes a moment until task 9183164a-8a6d-4a6e-b769-6cf6ceb987f6 (bootstrap) has been started and output is visible here.
2026-03-23 00:26:59.569505 | orchestrator |
2026-03-23 00:26:59.569619 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-23 00:26:59.569635 | orchestrator |
2026-03-23 00:26:59.569648 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-23 00:26:59.569660 | orchestrator | Monday 23 March 2026 00:26:47 +0000 (0:00:00.141) 0:00:00.141 **********
2026-03-23 00:26:59.569671 | orchestrator | ok: [testbed-manager]
2026-03-23 00:26:59.569684 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:26:59.569695 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:26:59.569706 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:26:59.569717 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:59.569728 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:59.569739 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:59.569750 | orchestrator |
2026-03-23 00:26:59.569761 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-23 00:26:59.569772 | orchestrator |
2026-03-23 00:26:59.569783 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-23 00:26:59.569795 | orchestrator | Monday 23 March 2026 00:26:47 +0000 (0:00:00.214) 0:00:00.356 **********
2026-03-23 00:26:59.569806 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:26:59.569817 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:26:59.569828 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:26:59.569840 | orchestrator | ok: [testbed-manager]
2026-03-23 00:26:59.569850 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:26:59.569861 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:26:59.569872 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:26:59.569883 | orchestrator |
2026-03-23 00:26:59.569894 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-23 00:26:59.569905 | orchestrator |
2026-03-23 00:26:59.569916 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-23 00:26:59.569927 | orchestrator | Monday 23 March 2026 00:26:52 +0000 (0:00:04.653) 0:00:05.009 **********
2026-03-23 00:26:59.569939 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-23 00:26:59.569950 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-23 00:26:59.569961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-23 00:26:59.569972 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-23 00:26:59.569983 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-23 00:26:59.569994 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-23 00:26:59.570005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-23 00:26:59.570123 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-23 00:26:59.570142 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-23 00:26:59.570156 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-23 00:26:59.570169 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-23 00:26:59.570181 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-23 00:26:59.570195 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-23 00:26:59.570207 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-23 00:26:59.570220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-23 00:26:59.570233 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-23 00:26:59.570268 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-23 00:26:59.570280 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-23 00:26:59.570291 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:26:59.570302 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-23 00:26:59.570313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-23 00:26:59.570324 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-23 00:26:59.570335 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-23 00:26:59.570345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-23 00:26:59.570356 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:26:59.570367 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-23 00:26:59.570378 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-23 00:26:59.570390 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-23 00:26:59.570401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-23 00:26:59.570412 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-23 00:26:59.570423 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-23 00:26:59.570433 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-23 00:26:59.570445 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-23 00:26:59.570456 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-23 00:26:59.570467 | orchestrator | skipping: [testbed-node-1]
2026-03-23
00:26:59.570477 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-23 00:26:59.570488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-23 00:26:59.570499 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:26:59.570511 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-23 00:26:59.570522 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-23 00:26:59.570532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-23 00:26:59.570544 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-23 00:26:59.570554 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-23 00:26:59.570565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-23 00:26:59.570576 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-23 00:26:59.570588 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-23 00:26:59.570617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-23 00:26:59.570629 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-23 00:26:59.570640 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-23 00:26:59.570650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-23 00:26:59.570661 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:26:59.570672 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-23 00:26:59.570683 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-23 00:26:59.570694 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:26:59.570706 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-23 00:26:59.570716 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:26:59.570727 | orchestrator | 2026-03-23 00:26:59.570739 | 
orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-23 00:26:59.570750 | orchestrator | 2026-03-23 00:26:59.570761 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-23 00:26:59.570772 | orchestrator | Monday 23 March 2026 00:26:52 +0000 (0:00:00.413) 0:00:05.423 ********** 2026-03-23 00:26:59.570783 | orchestrator | ok: [testbed-manager] 2026-03-23 00:26:59.570794 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:26:59.570813 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:26:59.570824 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:26:59.570835 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:26:59.570846 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:26:59.570856 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:26:59.570867 | orchestrator | 2026-03-23 00:26:59.570879 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-23 00:26:59.570890 | orchestrator | Monday 23 March 2026 00:26:54 +0000 (0:00:01.275) 0:00:06.698 ********** 2026-03-23 00:26:59.570901 | orchestrator | ok: [testbed-manager] 2026-03-23 00:26:59.570912 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:26:59.570923 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:26:59.570933 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:26:59.570944 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:26:59.570955 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:26:59.570966 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:26:59.570977 | orchestrator | 2026-03-23 00:26:59.570988 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-23 00:26:59.570999 | orchestrator | Monday 23 March 2026 00:26:55 +0000 (0:00:01.193) 0:00:07.892 ********** 2026-03-23 00:26:59.571011 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:26:59.571078 | orchestrator | 2026-03-23 00:26:59.571091 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-23 00:26:59.571102 | orchestrator | Monday 23 March 2026 00:26:55 +0000 (0:00:00.275) 0:00:08.168 ********** 2026-03-23 00:26:59.571113 | orchestrator | changed: [testbed-manager] 2026-03-23 00:26:59.571125 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:26:59.571136 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:26:59.571147 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:26:59.571158 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:26:59.571169 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:26:59.571180 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:26:59.571191 | orchestrator | 2026-03-23 00:26:59.571203 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-23 00:26:59.571214 | orchestrator | Monday 23 March 2026 00:26:57 +0000 (0:00:01.549) 0:00:09.717 ********** 2026-03-23 00:26:59.571225 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:26:59.571238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:26:59.571251 | orchestrator | 2026-03-23 00:26:59.571262 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-23 00:26:59.571291 | orchestrator | Monday 23 March 2026 00:26:57 +0000 (0:00:00.269) 0:00:09.987 ********** 2026-03-23 00:26:59.571303 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:26:59.571314 | 
orchestrator | changed: [testbed-node-1] 2026-03-23 00:26:59.571325 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:26:59.571341 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:26:59.571352 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:26:59.571363 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:26:59.571374 | orchestrator | 2026-03-23 00:26:59.571385 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-23 00:26:59.571397 | orchestrator | Monday 23 March 2026 00:26:58 +0000 (0:00:00.998) 0:00:10.986 ********** 2026-03-23 00:26:59.571408 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:26:59.571419 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:26:59.571430 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:26:59.571441 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:26:59.571452 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:26:59.571463 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:26:59.571481 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:26:59.571493 | orchestrator | 2026-03-23 00:26:59.571504 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-23 00:26:59.571515 | orchestrator | Monday 23 March 2026 00:26:59 +0000 (0:00:00.588) 0:00:11.574 ********** 2026-03-23 00:26:59.571526 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:26:59.571537 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:26:59.571548 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:26:59.571559 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:26:59.571570 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:26:59.571581 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:26:59.571592 | orchestrator | ok: [testbed-manager] 2026-03-23 00:26:59.571604 | orchestrator | 2026-03-23 00:26:59.571615 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-23 00:26:59.571628 | orchestrator | Monday 23 March 2026 00:26:59 +0000 (0:00:00.415) 0:00:11.989 ********** 2026-03-23 00:26:59.571639 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:26:59.571650 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:26:59.571669 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:27:12.230895 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:27:12.231019 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:27:12.231086 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:27:12.231095 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:27:12.231104 | orchestrator | 2026-03-23 00:27:12.231114 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-23 00:27:12.231124 | orchestrator | Monday 23 March 2026 00:26:59 +0000 (0:00:00.231) 0:00:12.221 ********** 2026-03-23 00:27:12.231135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:27:12.231164 | orchestrator | 2026-03-23 00:27:12.231173 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-23 00:27:12.231183 | orchestrator | Monday 23 March 2026 00:26:59 +0000 (0:00:00.306) 0:00:12.527 ********** 2026-03-23 00:27:12.231192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:27:12.231200 | orchestrator | 2026-03-23 00:27:12.231208 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-23 
00:27:12.231216 | orchestrator | Monday 23 March 2026 00:27:00 +0000 (0:00:00.277) 0:00:12.805 ********** 2026-03-23 00:27:12.231224 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.231234 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:12.231242 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:12.231249 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.231257 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:12.231265 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:12.231273 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:12.231281 | orchestrator | 2026-03-23 00:27:12.231289 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-23 00:27:12.231297 | orchestrator | Monday 23 March 2026 00:27:01 +0000 (0:00:01.407) 0:00:14.212 ********** 2026-03-23 00:27:12.231305 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:27:12.231314 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:27:12.231322 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:27:12.231330 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:27:12.231337 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:27:12.231345 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:27:12.231353 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:27:12.231361 | orchestrator | 2026-03-23 00:27:12.231369 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-23 00:27:12.231402 | orchestrator | Monday 23 March 2026 00:27:01 +0000 (0:00:00.216) 0:00:14.429 ********** 2026-03-23 00:27:12.231412 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:12.231421 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:12.231430 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.231438 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:12.231447 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:12.231456 | orchestrator 
| ok: [testbed-node-4] 2026-03-23 00:27:12.231465 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.231474 | orchestrator | 2026-03-23 00:27:12.231483 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-23 00:27:12.231503 | orchestrator | Monday 23 March 2026 00:27:03 +0000 (0:00:01.395) 0:00:15.824 ********** 2026-03-23 00:27:12.231512 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:27:12.231521 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:27:12.231530 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:27:12.231539 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:27:12.231548 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:27:12.231556 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:27:12.231565 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:27:12.231573 | orchestrator | 2026-03-23 00:27:12.231582 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-23 00:27:12.231592 | orchestrator | Monday 23 March 2026 00:27:03 +0000 (0:00:00.241) 0:00:16.065 ********** 2026-03-23 00:27:12.231601 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.231618 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:27:12.231628 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:27:12.231637 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:27:12.231646 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:27:12.231655 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:27:12.231664 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:27:12.231673 | orchestrator | 2026-03-23 00:27:12.231682 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-23 00:27:12.231691 | orchestrator | Monday 23 March 2026 00:27:04 +0000 (0:00:00.544) 0:00:16.609 ********** 2026-03-23 00:27:12.231700 | orchestrator | ok: 
[testbed-manager] 2026-03-23 00:27:12.231708 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:27:12.231718 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:27:12.231727 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:27:12.231736 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:27:12.231745 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:27:12.231754 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:27:12.231761 | orchestrator | 2026-03-23 00:27:12.231769 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-23 00:27:12.231777 | orchestrator | Monday 23 March 2026 00:27:05 +0000 (0:00:01.125) 0:00:17.735 ********** 2026-03-23 00:27:12.231785 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.231793 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.231801 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:12.231809 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:12.231817 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:12.231824 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:12.231832 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:12.231840 | orchestrator | 2026-03-23 00:27:12.231848 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-23 00:27:12.231856 | orchestrator | Monday 23 March 2026 00:27:06 +0000 (0:00:01.059) 0:00:18.794 ********** 2026-03-23 00:27:12.231881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:27:12.231890 | orchestrator | 2026-03-23 00:27:12.231898 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-23 00:27:12.231906 | orchestrator | Monday 23 March 2026 
00:27:06 +0000 (0:00:00.298) 0:00:19.092 ********** 2026-03-23 00:27:12.231923 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:27:12.231931 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:27:12.231939 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:27:12.231946 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:27:12.231954 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:27:12.231962 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:27:12.231969 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:27:12.231977 | orchestrator | 2026-03-23 00:27:12.231985 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-23 00:27:12.231993 | orchestrator | Monday 23 March 2026 00:27:07 +0000 (0:00:01.380) 0:00:20.472 ********** 2026-03-23 00:27:12.232001 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.232009 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:12.232017 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:12.232024 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:12.232048 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.232056 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:12.232064 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:12.232072 | orchestrator | 2026-03-23 00:27:12.232079 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-23 00:27:12.232087 | orchestrator | Monday 23 March 2026 00:27:08 +0000 (0:00:00.208) 0:00:20.681 ********** 2026-03-23 00:27:12.232095 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.232103 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:12.232111 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:12.232118 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:12.232126 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.232134 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:12.232141 | 
orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:12.232149 | orchestrator | 2026-03-23 00:27:12.232157 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-23 00:27:12.232165 | orchestrator | Monday 23 March 2026 00:27:08 +0000 (0:00:00.211) 0:00:20.892 ********** 2026-03-23 00:27:12.232172 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.232180 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:12.232188 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:12.232195 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:12.232203 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.232211 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:12.232219 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:12.232226 | orchestrator | 2026-03-23 00:27:12.232234 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-23 00:27:12.232242 | orchestrator | Monday 23 March 2026 00:27:08 +0000 (0:00:00.209) 0:00:21.102 ********** 2026-03-23 00:27:12.232251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:27:12.232261 | orchestrator | 2026-03-23 00:27:12.232269 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-23 00:27:12.232277 | orchestrator | Monday 23 March 2026 00:27:08 +0000 (0:00:00.265) 0:00:21.368 ********** 2026-03-23 00:27:12.232284 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.232292 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:12.232300 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:12.232307 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:12.232315 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.232323 | orchestrator | ok: 
[testbed-node-2] 2026-03-23 00:27:12.232330 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:12.232338 | orchestrator | 2026-03-23 00:27:12.232346 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-23 00:27:12.232354 | orchestrator | Monday 23 March 2026 00:27:09 +0000 (0:00:00.539) 0:00:21.907 ********** 2026-03-23 00:27:12.232361 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:27:12.232382 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:27:12.232396 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:27:12.232409 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:27:12.232425 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:27:12.232444 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:27:12.232457 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:27:12.232470 | orchestrator | 2026-03-23 00:27:12.232484 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-23 00:27:12.232498 | orchestrator | Monday 23 March 2026 00:27:09 +0000 (0:00:00.213) 0:00:22.120 ********** 2026-03-23 00:27:12.232512 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.232526 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:27:12.232540 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.232554 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:27:12.232568 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:12.232583 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:12.232597 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:27:12.232613 | orchestrator | 2026-03-23 00:27:12.232629 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-23 00:27:12.232644 | orchestrator | Monday 23 March 2026 00:27:10 +0000 (0:00:01.048) 0:00:23.169 ********** 2026-03-23 00:27:12.232659 | orchestrator | ok: [testbed-manager] 2026-03-23 
00:27:12.232674 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.232690 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:12.232719 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:12.232734 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:12.232749 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:12.232763 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:12.232777 | orchestrator | 2026-03-23 00:27:12.232792 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-23 00:27:12.232807 | orchestrator | Monday 23 March 2026 00:27:11 +0000 (0:00:00.622) 0:00:23.792 ********** 2026-03-23 00:27:12.232822 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:12.232836 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:12.232851 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:12.232867 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:27:12.232892 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:53.274502 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:27:53.274629 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:27:53.274645 | orchestrator | 2026-03-23 00:27:53.274659 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-23 00:27:53.274673 | orchestrator | Monday 23 March 2026 00:27:12 +0000 (0:00:01.090) 0:00:24.882 ********** 2026-03-23 00:27:53.274684 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:53.274696 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:53.274707 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:53.274718 | orchestrator | changed: [testbed-manager] 2026-03-23 00:27:53.274729 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:27:53.274741 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:27:53.274752 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:27:53.274763 | orchestrator | 2026-03-23 00:27:53.274774 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-23 00:27:53.274786 | orchestrator | Monday 23 March 2026 00:27:29 +0000 (0:00:17.149) 0:00:42.032 ********** 2026-03-23 00:27:53.274798 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:53.274810 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:53.274821 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:53.274832 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:53.274843 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:53.274854 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:53.274865 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:53.274876 | orchestrator | 2026-03-23 00:27:53.274887 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-23 00:27:53.274898 | orchestrator | Monday 23 March 2026 00:27:29 +0000 (0:00:00.206) 0:00:42.238 ********** 2026-03-23 00:27:53.274909 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:53.274946 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:53.274958 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:53.274968 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:53.274979 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:27:53.274990 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:27:53.275003 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:27:53.275016 | orchestrator | 2026-03-23 00:27:53.275029 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-23 00:27:53.275041 | orchestrator | Monday 23 March 2026 00:27:29 +0000 (0:00:00.200) 0:00:42.439 ********** 2026-03-23 00:27:53.275053 | orchestrator | ok: [testbed-manager] 2026-03-23 00:27:53.275091 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:27:53.275103 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:27:53.275116 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:27:53.275127 | orchestrator | ok: 
[testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
Monday 23 March 2026 00:27:30 +0000 (0:00:00.205) 0:00:42.644 **********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.rsyslog : Install rsyslog package] ************************
Monday 23 March 2026 00:27:30 +0000 (0:00:00.275) 0:00:42.920 **********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
Monday 23 March 2026 00:27:32 +0000 (0:00:01.908) 0:00:44.829 **********
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [osism.services.rsyslog : Manage rsyslog service] *************************
Monday 23 March 2026 00:27:33 +0000 (0:00:01.197) 0:00:46.026 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-5]

TASK [osism.services.rsyslog : Include fluentd tasks] **************************
Monday 23 March 2026 00:27:34 +0000 (0:00:00.851) 0:00:46.878 **********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
Monday 23 March 2026 00:27:34 +0000 (0:00:00.278) 0:00:47.157 **********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]

TASK [osism.services.rsyslog : Include additional log server tasks] ************
Monday 23 March 2026 00:27:35 +0000 (0:00:01.002) 0:00:48.159 **********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.rsyslog : Include logrotate tasks] ************************
Monday 23 March 2026 00:27:35 +0000 (0:00:00.263) 0:00:48.422 **********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
Monday 23 March 2026 00:27:36 +0000 (0:00:00.299) 0:00:48.721 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
Monday 23 March 2026 00:27:38 +0000 (0:00:01.913) 0:00:50.635 **********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-4]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Monday 23 March 2026 00:27:39 +0000 (0:00:01.175) 0:00:51.811 **********
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Monday 23 March 2026 00:27:50 +0000 (0:00:11.156) 0:01:02.967 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Monday 23 March 2026 00:27:51 +0000 (0:00:01.217) 0:01:04.185 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Monday 23 March 2026 00:27:52 +0000 (0:00:00.907) 0:01:05.093 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Monday 23 March 2026 00:27:52 +0000 (0:00:00.208) 0:01:05.301 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Monday 23 March 2026 00:27:52 +0000 (0:00:00.218) 0:01:05.520 **********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.packages : Install needrestart package] ********************
Monday 23 March 2026 00:27:53 +0000 (0:00:00.281) 0:01:05.801 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-5]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Monday 23 March 2026 00:27:55 +0000 (0:00:01.741) 0:01:07.543 **********
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-2]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Monday 23 March 2026 00:27:55 +0000 (0:00:00.727) 0:01:08.270 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.packages : Update package cache] ***************************
Monday 23 March 2026 00:27:55 +0000 (0:00:00.260) 0:01:08.531 **********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-5]

TASK [osism.commons.packages : Download upgrade packages] **********************
Monday 23 March 2026 00:27:57 +0000 (0:00:01.472) 0:01:10.004 **********
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-2]

TASK [osism.commons.packages : Upgrade packages] *******************************
Monday 23 March 2026 00:27:59 +0000 (0:00:02.130) 0:01:12.135 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-2]

TASK [osism.commons.packages : Download required packages] *********************
Monday 23 March 2026 00:28:03 +0000 (0:00:03.418) 0:01:15.553 **********
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]

TASK [osism.commons.packages : Install required packages] **********************
Monday 23 March 2026 00:28:37 +0000 (0:00:34.338) 0:01:49.892 **********
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Monday 23 March 2026 00:29:48 +0000 (0:01:10.836) 0:03:00.728 **********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Monday 23 March 2026 00:29:50 +0000 (0:00:01.921) 0:03:02.649 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Monday 23 March 2026 00:30:00 +0000 (0:00:10.313) 0:03:12.963 **********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Monday 23 March 2026 00:30:00 +0000 (0:00:00.368) 0:03:13.331 **********
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-5]
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Monday 23 March 2026 00:30:01 +0000 (0:00:00.622) 0:03:13.954 **********
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5]
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Monday 23 March 2026 00:30:06 +0000 (0:00:04.707) 0:03:18.661 **********
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})

TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
Monday 23 March 2026 00:30:07 +0000 (0:00:01.626) 0:03:20.287 **********
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
Monday 23 March 2026 00:30:08 +0000 (0:00:00.618) 0:03:20.906 **********
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-5]
changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
Monday 23 March 2026 00:30:08 +0000 (0:00:00.495) 0:03:21.401 **********
skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})

TASK [osism.commons.limits : Include limits tasks] *****************************
Monday 23 March 2026 00:30:10 +0000 (0:00:01.645) 0:03:23.047 **********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.services : Populate service facts] *************************
Monday 23 March 2026 00:30:10 +0000 (0:00:00.287) 0:03:23.335 **********
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-manager]

TASK [osism.commons.services : Check services] *********************************
Monday 23 March 2026 00:30:15 +0000 (0:00:04.916) 0:03:28.251 **********
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-2] => (item=nscd)
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=nscd)
| orchestrator | skipping: [testbed-node-5] 2026-03-23 00:30:21.847408 | orchestrator | 2026-03-23 00:30:21.847419 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-23 00:30:21.847430 | orchestrator | Monday 23 March 2026 00:30:16 +0000 (0:00:00.338) 0:03:28.590 ********** 2026-03-23 00:30:21.847441 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-23 00:30:21.847452 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-23 00:30:21.847463 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-23 00:30:21.847500 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-23 00:30:21.847512 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-23 00:30:21.847523 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-23 00:30:21.847549 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-23 00:30:21.847561 | orchestrator | 2026-03-23 00:30:21.847572 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-23 00:30:21.847583 | orchestrator | Monday 23 March 2026 00:30:17 +0000 (0:00:01.161) 0:03:29.751 ********** 2026-03-23 00:30:21.847596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:30:21.847610 | orchestrator | 2026-03-23 00:30:21.847621 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-23 00:30:21.847632 | orchestrator | Monday 23 March 2026 00:30:17 +0000 (0:00:00.381) 0:03:30.133 ********** 2026-03-23 00:30:21.847643 | orchestrator | ok: [testbed-manager] 2026-03-23 00:30:21.847654 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:30:21.847665 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:30:21.847676 | orchestrator | ok: 
[testbed-node-2] 2026-03-23 00:30:21.847687 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:30:21.847698 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:30:21.847708 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:30:21.847719 | orchestrator | 2026-03-23 00:30:21.847730 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-23 00:30:21.847741 | orchestrator | Monday 23 March 2026 00:30:19 +0000 (0:00:01.541) 0:03:31.674 ********** 2026-03-23 00:30:21.847752 | orchestrator | ok: [testbed-manager] 2026-03-23 00:30:21.847763 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:30:21.847774 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:30:21.847784 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:30:21.847795 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:30:21.847806 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:30:21.847835 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:30:21.847846 | orchestrator | 2026-03-23 00:30:21.847857 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-23 00:30:21.847868 | orchestrator | Monday 23 March 2026 00:30:19 +0000 (0:00:00.704) 0:03:32.379 ********** 2026-03-23 00:30:21.847885 | orchestrator | changed: [testbed-manager] 2026-03-23 00:30:21.847910 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:30:21.847935 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:30:21.847953 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:30:21.847971 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:30:21.847988 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:30:21.848006 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:30:21.848023 | orchestrator | 2026-03-23 00:30:21.848041 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-23 00:30:21.848059 | orchestrator | Monday 23 March 2026 00:30:20 +0000 (0:00:00.774) 
0:03:33.153 ********** 2026-03-23 00:30:21.848077 | orchestrator | ok: [testbed-manager] 2026-03-23 00:30:21.848095 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:30:21.848113 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:30:21.848130 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:30:21.848149 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:30:21.848220 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:30:21.848242 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:30:21.848261 | orchestrator | 2026-03-23 00:30:21.848279 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-23 00:30:21.848296 | orchestrator | Monday 23 March 2026 00:30:21 +0000 (0:00:00.630) 0:03:33.784 ********** 2026-03-23 00:30:21.848330 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774224346.363929, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:21.848386 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774224334.8763676, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:21.848407 | orchestrator | 
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774224359.0692303, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:21.848464 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774224371.1653771, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766473 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774224354.829846, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766681 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774224359.5909693, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766707 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774224354.4054892, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766741 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766785 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766799 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766814 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766902 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766922 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766935 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 00:30:27.766949 | orchestrator | 2026-03-23 00:30:27.766965 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-23 00:30:27.766981 | orchestrator | Monday 23 March 2026 00:30:22 +0000 (0:00:01.009) 0:03:34.793 ********** 2026-03-23 00:30:27.766995 | orchestrator | changed: [testbed-manager] 2026-03-23 00:30:27.767011 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:30:27.767024 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:30:27.767050 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:30:27.767063 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:30:27.767077 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:30:27.767091 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:30:27.767105 | orchestrator | 2026-03-23 00:30:27.767120 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-23 00:30:27.767134 | orchestrator | Monday 23 March 2026 00:30:23 +0000 (0:00:01.246) 0:03:36.040 ********** 2026-03-23 00:30:27.767149 | orchestrator | changed: [testbed-manager] 2026-03-23 00:30:27.767163 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:30:27.767202 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:30:27.767225 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:30:27.767240 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:30:27.767255 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:30:27.767270 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:30:27.767282 | orchestrator | 2026-03-23 00:30:27.767295 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-23 00:30:27.767310 | orchestrator | Monday 23 March 2026 00:30:24 +0000 (0:00:01.315) 0:03:37.355 ********** 2026-03-23 00:30:27.767325 | orchestrator | changed: [testbed-manager] 2026-03-23 00:30:27.767338 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:30:27.767352 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:30:27.767366 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:30:27.767380 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:30:27.767394 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:30:27.767408 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:30:27.767421 | orchestrator | 2026-03-23 00:30:27.767436 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-23 00:30:27.767451 | orchestrator | Monday 23 March 2026 00:30:26 +0000 (0:00:01.323) 0:03:38.679 ********** 2026-03-23 00:30:27.767466 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:30:27.767480 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:30:27.767495 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:30:27.767509 | orchestrator | skipping: [testbed-node-2] 
2026-03-23 00:30:27.767523 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:30:27.767538 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:30:27.767552 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:30:27.767567 | orchestrator |
2026-03-23 00:30:27.767582 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-23 00:30:27.767596 | orchestrator | Monday 23 March 2026 00:30:26 +0000 (0:00:00.290) 0:03:38.969 **********
2026-03-23 00:30:27.767610 | orchestrator | ok: [testbed-manager]
2026-03-23 00:30:27.767626 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:30:27.767641 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:30:27.767655 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:30:27.767670 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:30:27.767684 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:30:27.767699 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:30:27.767714 | orchestrator |
2026-03-23 00:30:27.767727 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-23 00:30:27.767742 | orchestrator | Monday 23 March 2026 00:30:27 +0000 (0:00:00.853) 0:03:39.823 **********
2026-03-23 00:30:27.767758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:30:27.767775 | orchestrator |
2026-03-23 00:30:27.767789 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-23 00:30:27.767819 | orchestrator | Monday 23 March 2026 00:30:27 +0000 (0:00:00.470) 0:03:40.293 **********
2026-03-23 00:31:46.174266 | orchestrator | ok: [testbed-manager]
2026-03-23 00:31:46.174386 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:31:46.174404 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:31:46.174414 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:31:46.174446 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:31:46.174455 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:31:46.174464 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:31:46.174473 | orchestrator |
2026-03-23 00:31:46.174485 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-23 00:31:46.174499 | orchestrator | Monday 23 March 2026 00:30:37 +0000 (0:00:09.943) 0:03:50.237 **********
2026-03-23 00:31:46.174514 | orchestrator | ok: [testbed-manager]
2026-03-23 00:31:46.174528 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:31:46.174542 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:31:46.174555 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:31:46.174570 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:31:46.174585 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:31:46.174599 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:31:46.174613 | orchestrator |
2026-03-23 00:31:46.174628 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-23 00:31:46.174644 | orchestrator | Monday 23 March 2026 00:30:39 +0000 (0:00:01.507) 0:03:51.744 **********
2026-03-23 00:31:46.174659 | orchestrator | ok: [testbed-manager]
2026-03-23 00:31:46.174673 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:31:46.174687 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:31:46.174701 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:31:46.174715 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:31:46.174730 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:31:46.174745 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:31:46.174760 | orchestrator |
2026-03-23 00:31:46.174776 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-23 00:31:46.174788 | orchestrator | Monday 23 March 2026 00:30:40 +0000 (0:00:01.163) 0:03:52.908 **********
2026-03-23 00:31:46.174799 | orchestrator | ok: [testbed-manager]
2026-03-23 00:31:46.174815 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:31:46.174830 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:31:46.174846 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:31:46.174863 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:31:46.174879 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:31:46.174895 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:31:46.174909 | orchestrator |
2026-03-23 00:31:46.174925 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-23 00:31:46.174943 | orchestrator | Monday 23 March 2026 00:30:40 +0000 (0:00:00.277) 0:03:53.185 **********
2026-03-23 00:31:46.174957 | orchestrator | ok: [testbed-manager]
2026-03-23 00:31:46.174973 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:31:46.174989 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:31:46.175005 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:31:46.175021 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:31:46.175032 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:31:46.175041 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:31:46.175052 | orchestrator |
2026-03-23 00:31:46.175062 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-23 00:31:46.175072 | orchestrator | Monday 23 March 2026 00:30:40 +0000 (0:00:00.271) 0:03:53.456 **********
2026-03-23 00:31:46.175083 | orchestrator | ok: [testbed-manager]
2026-03-23 00:31:46.175091 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:31:46.175100 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:31:46.175109 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:31:46.175117 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:31:46.175126 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:31:46.175135 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:31:46.175144 | orchestrator |
2026-03-23 00:31:46.175152 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-23 00:31:46.175161 | orchestrator | Monday 23 March 2026 00:30:41 +0000 (0:00:00.271) 0:03:53.728 **********
2026-03-23 00:31:46.175170 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:31:46.175179 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:31:46.175188 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:31:46.175238 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:31:46.175249 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:31:46.175258 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:31:46.175267 | orchestrator | ok: [testbed-manager]
2026-03-23 00:31:46.175275 | orchestrator |
2026-03-23 00:31:46.175284 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-23 00:31:46.175293 | orchestrator | Monday 23 March 2026 00:30:45 +0000 (0:00:04.464) 0:03:58.193 **********
2026-03-23 00:31:46.175304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:31:46.175316 | orchestrator |
2026-03-23 00:31:46.175325 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-23 00:31:46.175334 | orchestrator | Monday 23 March 2026 00:30:46 +0000 (0:00:00.375) 0:03:58.569 **********
2026-03-23 00:31:46.175343 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-23 00:31:46.175351 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-23 00:31:46.175361 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:31:46.175370 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-23 00:31:46.175378 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-23 00:31:46.175387 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-23 00:31:46.175396 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-23 00:31:46.175404 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:31:46.175413 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-23 00:31:46.175422 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:31:46.175431 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-23 00:31:46.175439 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-23 00:31:46.175448 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-23 00:31:46.175457 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:31:46.175466 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-23 00:31:46.175475 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-23 00:31:46.175503 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:31:46.175512 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:31:46.175521 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-23 00:31:46.175530 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-23 00:31:46.175539 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:31:46.175548 | orchestrator |
2026-03-23 00:31:46.175557 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-23 00:31:46.175565 | orchestrator | Monday 23 March 2026 00:30:46 +0000 (0:00:00.304) 0:03:58.874 **********
2026-03-23 00:31:46.175575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:31:46.175584 | orchestrator |
2026-03-23 00:31:46.175592 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-23 00:31:46.175601 | orchestrator | Monday 23 March 2026 00:30:46 +0000 (0:00:00.468) 0:03:59.342 **********
2026-03-23 00:31:46.175610 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-23 00:31:46.175619 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:31:46.175628 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-23 00:31:46.175636 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-23 00:31:46.175662 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:31:46.175672 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-23 00:31:46.175688 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:31:46.175697 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-23 00:31:46.175705 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:31:46.175714 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-23 00:31:46.175723 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:31:46.175731 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:31:46.175742 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-23 00:31:46.175757 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:31:46.175772 | orchestrator |
2026-03-23 00:31:46.175786 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-23 00:31:46.175801 | orchestrator | Monday 23 March 2026 00:30:47 +0000 (0:00:00.268) 0:03:59.611 **********
2026-03-23 00:31:46.175817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:31:46.175832 | orchestrator |
2026-03-23 00:31:46.175847 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-23 00:31:46.175863 | orchestrator | Monday 23 March 2026 00:30:47 +0000 (0:00:00.354) 0:03:59.966 **********
2026-03-23 00:31:46.175878 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:31:46.175887 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:31:46.175896 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:31:46.175904 | orchestrator | changed: [testbed-manager]
2026-03-23 00:31:46.175913 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:31:46.175921 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:31:46.175930 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:31:46.175939 | orchestrator |
2026-03-23 00:31:46.175947 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-23 00:31:46.175956 | orchestrator | Monday 23 March 2026 00:31:20 +0000 (0:00:33.116) 0:04:33.083 **********
2026-03-23 00:31:46.175965 | orchestrator | changed: [testbed-manager]
2026-03-23 00:31:46.175973 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:31:46.175982 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:31:46.175990 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:31:46.175999 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:31:46.176008 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:31:46.176016 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:31:46.176025 | orchestrator |
2026-03-23 00:31:46.176034 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-23 00:31:46.176042 | orchestrator | Monday 23 March 2026 00:31:30 +0000 (0:00:09.646) 0:04:42.729 **********
2026-03-23 00:31:46.176051 | orchestrator | changed: [testbed-manager]
2026-03-23 00:31:46.176060 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:31:46.176068 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:31:46.176077 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:31:46.176085 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:31:46.176094 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:31:46.176103 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:31:46.176111 | orchestrator |
2026-03-23 00:31:46.176120 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-23 00:31:46.176129 | orchestrator | Monday 23 March 2026 00:31:37 +0000 (0:00:07.799) 0:04:50.528 **********
2026-03-23 00:31:46.176137 | orchestrator | ok: [testbed-manager]
2026-03-23 00:31:46.176146 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:31:46.176155 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:31:46.176163 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:31:46.176172 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:31:46.176181 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:31:46.176189 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:31:46.176198 | orchestrator |
2026-03-23 00:31:46.176264 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-23 00:31:46.176282 | orchestrator | Monday 23 March 2026 00:31:39 +0000 (0:00:01.844) 0:04:52.373 **********
2026-03-23 00:31:46.176293 | orchestrator | changed: [testbed-manager]
2026-03-23 00:31:46.176308 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:31:46.176322 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:31:46.176337 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:31:46.176351 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:31:46.176365 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:31:46.176382 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:31:46.176397 | orchestrator |
2026-03-23 00:31:46.176438 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-23 00:31:56.714754 | orchestrator | Monday 23 March 2026 00:31:46 +0000 (0:00:06.326) 0:04:58.700 **********
2026-03-23 00:31:56.714833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:31:56.714841 | orchestrator |
2026-03-23 00:31:56.714847 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-23 00:31:56.714853 | orchestrator | Monday 23 March 2026 00:31:46 +0000 (0:00:00.340) 0:04:59.041 **********
2026-03-23 00:31:56.714858 | orchestrator | changed: [testbed-manager]
2026-03-23 00:31:56.714864 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:31:56.714868 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:31:56.714872 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:31:56.714876 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:31:56.714881 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:31:56.714885 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:31:56.714889 | orchestrator |
2026-03-23 00:31:56.714894 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-23 00:31:56.714898 | orchestrator | Monday 23 March 2026 00:31:47 +0000 (0:00:00.679) 0:04:59.721 **********
2026-03-23 00:31:56.714902 | orchestrator | ok: [testbed-manager]
2026-03-23 00:31:56.714908 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:31:56.714912 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:31:56.714916 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:31:56.714921 |
orchestrator | ok: [testbed-node-0] 2026-03-23 00:31:56.714925 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:31:56.714929 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:31:56.714933 | orchestrator | 2026-03-23 00:31:56.714938 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-23 00:31:56.714942 | orchestrator | Monday 23 March 2026 00:31:49 +0000 (0:00:01.829) 0:05:01.551 ********** 2026-03-23 00:31:56.714946 | orchestrator | changed: [testbed-manager] 2026-03-23 00:31:56.714951 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:31:56.714955 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:31:56.714959 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:31:56.714963 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:31:56.714968 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:31:56.714972 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:31:56.714976 | orchestrator | 2026-03-23 00:31:56.714980 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-23 00:31:56.714985 | orchestrator | Monday 23 March 2026 00:31:49 +0000 (0:00:00.670) 0:05:02.221 ********** 2026-03-23 00:31:56.714989 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:31:56.714993 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:31:56.714998 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:31:56.715002 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:31:56.715006 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:31:56.715010 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:31:56.715015 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:31:56.715019 | orchestrator | 2026-03-23 00:31:56.715023 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-23 00:31:56.715039 | orchestrator | Monday 23 March 2026 00:31:49 +0000 (0:00:00.223) 
0:05:02.444 ********** 2026-03-23 00:31:56.715060 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:31:56.715064 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:31:56.715069 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:31:56.715073 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:31:56.715077 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:31:56.715081 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:31:56.715085 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:31:56.715089 | orchestrator | 2026-03-23 00:31:56.715093 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-23 00:31:56.715098 | orchestrator | Monday 23 March 2026 00:31:50 +0000 (0:00:00.328) 0:05:02.773 ********** 2026-03-23 00:31:56.715102 | orchestrator | ok: [testbed-manager] 2026-03-23 00:31:56.715106 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:31:56.715111 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:31:56.715115 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:31:56.715119 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:31:56.715123 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:31:56.715127 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:31:56.715131 | orchestrator | 2026-03-23 00:31:56.715135 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-23 00:31:56.715140 | orchestrator | Monday 23 March 2026 00:31:50 +0000 (0:00:00.396) 0:05:03.169 ********** 2026-03-23 00:31:56.715144 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:31:56.715148 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:31:56.715152 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:31:56.715156 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:31:56.715161 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:31:56.715165 | orchestrator | skipping: [testbed-node-4] 2026-03-23 
00:31:56.715169 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:31:56.715173 | orchestrator | 2026-03-23 00:31:56.715177 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-23 00:31:56.715182 | orchestrator | Monday 23 March 2026 00:31:50 +0000 (0:00:00.229) 0:05:03.399 ********** 2026-03-23 00:31:56.715186 | orchestrator | ok: [testbed-manager] 2026-03-23 00:31:56.715190 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:31:56.715194 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:31:56.715199 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:31:56.715253 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:31:56.715259 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:31:56.715263 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:31:56.715267 | orchestrator | 2026-03-23 00:31:56.715271 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-23 00:31:56.715276 | orchestrator | Monday 23 March 2026 00:31:51 +0000 (0:00:00.276) 0:05:03.676 ********** 2026-03-23 00:31:56.715280 | orchestrator | ok: [testbed-manager] =>  2026-03-23 00:31:56.715284 | orchestrator |  docker_version: 5:27.5.1 2026-03-23 00:31:56.715288 | orchestrator | ok: [testbed-node-0] =>  2026-03-23 00:31:56.715293 | orchestrator |  docker_version: 5:27.5.1 2026-03-23 00:31:56.715297 | orchestrator | ok: [testbed-node-1] =>  2026-03-23 00:31:56.715301 | orchestrator |  docker_version: 5:27.5.1 2026-03-23 00:31:56.715305 | orchestrator | ok: [testbed-node-2] =>  2026-03-23 00:31:56.715309 | orchestrator |  docker_version: 5:27.5.1 2026-03-23 00:31:56.715323 | orchestrator | ok: [testbed-node-3] =>  2026-03-23 00:31:56.715329 | orchestrator |  docker_version: 5:27.5.1 2026-03-23 00:31:56.715334 | orchestrator | ok: [testbed-node-4] =>  2026-03-23 00:31:56.715338 | orchestrator |  docker_version: 5:27.5.1 2026-03-23 00:31:56.715343 | orchestrator | ok: [testbed-node-5] =>  
2026-03-23 00:31:56.715348 | orchestrator |  docker_version: 5:27.5.1 2026-03-23 00:31:56.715352 | orchestrator | 2026-03-23 00:31:56.715357 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-23 00:31:56.715362 | orchestrator | Monday 23 March 2026 00:31:51 +0000 (0:00:00.249) 0:05:03.925 ********** 2026-03-23 00:31:56.715366 | orchestrator | ok: [testbed-manager] =>  2026-03-23 00:31:56.715376 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-23 00:31:56.715380 | orchestrator | ok: [testbed-node-0] =>  2026-03-23 00:31:56.715385 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-23 00:31:56.715390 | orchestrator | ok: [testbed-node-1] =>  2026-03-23 00:31:56.715395 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-23 00:31:56.715399 | orchestrator | ok: [testbed-node-2] =>  2026-03-23 00:31:56.715404 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-23 00:31:56.715409 | orchestrator | ok: [testbed-node-3] =>  2026-03-23 00:31:56.715413 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-23 00:31:56.715417 | orchestrator | ok: [testbed-node-4] =>  2026-03-23 00:31:56.715421 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-23 00:31:56.715425 | orchestrator | ok: [testbed-node-5] =>  2026-03-23 00:31:56.715429 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-23 00:31:56.715434 | orchestrator | 2026-03-23 00:31:56.715438 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-23 00:31:56.715442 | orchestrator | Monday 23 March 2026 00:31:51 +0000 (0:00:00.272) 0:05:04.198 ********** 2026-03-23 00:31:56.715446 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:31:56.715450 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:31:56.715455 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:31:56.715459 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:31:56.715463 | orchestrator | skipping: [testbed-node-3] 
2026-03-23 00:31:56.715467 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:31:56.715471 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:31:56.715475 | orchestrator | 2026-03-23 00:31:56.715480 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-23 00:31:56.715484 | orchestrator | Monday 23 March 2026 00:31:51 +0000 (0:00:00.255) 0:05:04.454 ********** 2026-03-23 00:31:56.715488 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:31:56.715492 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:31:56.715496 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:31:56.715500 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:31:56.715504 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:31:56.715508 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:31:56.715512 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:31:56.715517 | orchestrator | 2026-03-23 00:31:56.715521 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-23 00:31:56.715525 | orchestrator | Monday 23 March 2026 00:31:52 +0000 (0:00:00.263) 0:05:04.718 ********** 2026-03-23 00:31:56.715534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:31:56.715541 | orchestrator | 2026-03-23 00:31:56.715545 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-23 00:31:56.715549 | orchestrator | Monday 23 March 2026 00:31:52 +0000 (0:00:00.364) 0:05:05.083 ********** 2026-03-23 00:31:56.715553 | orchestrator | ok: [testbed-manager] 2026-03-23 00:31:56.715557 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:31:56.715562 | orchestrator | ok: [testbed-node-0] 2026-03-23 
00:31:56.715566 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:31:56.715570 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:31:56.715574 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:31:56.715578 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:31:56.715582 | orchestrator | 2026-03-23 00:31:56.715586 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-23 00:31:56.715591 | orchestrator | Monday 23 March 2026 00:31:53 +0000 (0:00:00.960) 0:05:06.043 ********** 2026-03-23 00:31:56.715595 | orchestrator | ok: [testbed-manager] 2026-03-23 00:31:56.715599 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:31:56.715603 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:31:56.715607 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:31:56.715611 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:31:56.715619 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:31:56.715623 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:31:56.715627 | orchestrator | 2026-03-23 00:31:56.715631 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-23 00:31:56.715637 | orchestrator | Monday 23 March 2026 00:31:56 +0000 (0:00:02.858) 0:05:08.901 ********** 2026-03-23 00:31:56.715641 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-23 00:31:56.715646 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-23 00:31:56.715650 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-23 00:31:56.715654 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-23 00:31:56.715659 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-23 00:31:56.715663 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-23 00:31:56.715667 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:31:56.715671 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-03-23 00:31:56.715675 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-23 00:31:56.715679 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-23 00:31:56.715684 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:31:56.715688 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-23 00:31:56.715692 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-23 00:31:56.715696 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-23 00:31:56.715700 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:31:56.715704 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-23 00:31:56.715711 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-23 00:33:01.080749 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-23 00:33:01.080868 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:33:01.080886 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-23 00:33:01.080898 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-23 00:33:01.080909 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-23 00:33:01.080921 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:33:01.080932 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:33:01.080944 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-23 00:33:01.080955 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-23 00:33:01.080966 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-23 00:33:01.080977 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:33:01.080989 | orchestrator | 2026-03-23 00:33:01.081002 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-23 00:33:01.081015 | orchestrator | Monday 23 
March 2026 00:31:56 +0000 (0:00:00.565) 0:05:09.467 ********** 2026-03-23 00:33:01.081026 | orchestrator | ok: [testbed-manager] 2026-03-23 00:33:01.081038 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.081049 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.081060 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.081071 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.081082 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.081093 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.081104 | orchestrator | 2026-03-23 00:33:01.081115 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-23 00:33:01.081126 | orchestrator | Monday 23 March 2026 00:32:04 +0000 (0:00:07.280) 0:05:16.748 ********** 2026-03-23 00:33:01.081138 | orchestrator | ok: [testbed-manager] 2026-03-23 00:33:01.081149 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.081160 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.081171 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.081469 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.081489 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.081543 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.081565 | orchestrator | 2026-03-23 00:33:01.081612 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-23 00:33:01.081634 | orchestrator | Monday 23 March 2026 00:32:05 +0000 (0:00:01.064) 0:05:17.812 ********** 2026-03-23 00:33:01.081652 | orchestrator | ok: [testbed-manager] 2026-03-23 00:33:01.081669 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.081689 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.081707 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.081726 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.081746 | orchestrator | 
changed: [testbed-node-5] 2026-03-23 00:33:01.081763 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.081781 | orchestrator | 2026-03-23 00:33:01.081799 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-23 00:33:01.081817 | orchestrator | Monday 23 March 2026 00:32:14 +0000 (0:00:09.513) 0:05:27.326 ********** 2026-03-23 00:33:01.081837 | orchestrator | changed: [testbed-manager] 2026-03-23 00:33:01.081856 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.081894 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.081916 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.081935 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.081953 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.081972 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.081991 | orchestrator | 2026-03-23 00:33:01.082010 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-23 00:33:01.082112 | orchestrator | Monday 23 March 2026 00:32:18 +0000 (0:00:03.272) 0:05:30.598 ********** 2026-03-23 00:33:01.082134 | orchestrator | ok: [testbed-manager] 2026-03-23 00:33:01.082154 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.082173 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.082216 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.082245 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.082256 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.082267 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.082278 | orchestrator | 2026-03-23 00:33:01.082289 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-23 00:33:01.082300 | orchestrator | Monday 23 March 2026 00:32:19 +0000 (0:00:01.424) 0:05:32.023 ********** 2026-03-23 00:33:01.082311 | orchestrator | ok: [testbed-manager] 
2026-03-23 00:33:01.082322 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.082333 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.082343 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.082354 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.082365 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.082376 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.082387 | orchestrator | 2026-03-23 00:33:01.082398 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-23 00:33:01.082412 | orchestrator | Monday 23 March 2026 00:32:20 +0000 (0:00:01.281) 0:05:33.304 ********** 2026-03-23 00:33:01.082431 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:33:01.082450 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:33:01.082471 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:33:01.082491 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:33:01.082530 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:33:01.082549 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:33:01.082569 | orchestrator | changed: [testbed-manager] 2026-03-23 00:33:01.082587 | orchestrator | 2026-03-23 00:33:01.082605 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-23 00:33:01.082619 | orchestrator | Monday 23 March 2026 00:32:21 +0000 (0:00:00.538) 0:05:33.843 ********** 2026-03-23 00:33:01.082639 | orchestrator | ok: [testbed-manager] 2026-03-23 00:33:01.082657 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.082675 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.082717 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.082738 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.082754 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.082769 | orchestrator | changed: [testbed-node-2] 2026-03-23 
00:33:01.082785 | orchestrator | 2026-03-23 00:33:01.082802 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-23 00:33:01.082850 | orchestrator | Monday 23 March 2026 00:32:31 +0000 (0:00:10.669) 0:05:44.513 ********** 2026-03-23 00:33:01.082871 | orchestrator | changed: [testbed-manager] 2026-03-23 00:33:01.082891 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.082909 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.082928 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.082946 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.082966 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.082984 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.083004 | orchestrator | 2026-03-23 00:33:01.083016 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-23 00:33:01.083027 | orchestrator | Monday 23 March 2026 00:32:33 +0000 (0:00:01.147) 0:05:45.660 ********** 2026-03-23 00:33:01.083038 | orchestrator | ok: [testbed-manager] 2026-03-23 00:33:01.083049 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.083060 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.083070 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.083081 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.083092 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.083103 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.083114 | orchestrator | 2026-03-23 00:33:01.083125 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-23 00:33:01.083136 | orchestrator | Monday 23 March 2026 00:32:42 +0000 (0:00:09.497) 0:05:55.157 ********** 2026-03-23 00:33:01.083147 | orchestrator | ok: [testbed-manager] 2026-03-23 00:33:01.083158 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.083168 | 
orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.083212 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.083224 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.083235 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.083246 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.083256 | orchestrator | 2026-03-23 00:33:01.083267 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-23 00:33:01.083295 | orchestrator | Monday 23 March 2026 00:32:54 +0000 (0:00:11.540) 0:06:06.698 ********** 2026-03-23 00:33:01.083307 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-23 00:33:01.083318 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-23 00:33:01.083329 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-23 00:33:01.083340 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-23 00:33:01.083352 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-23 00:33:01.083363 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-23 00:33:01.083374 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-23 00:33:01.083385 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-23 00:33:01.083396 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-23 00:33:01.083407 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-23 00:33:01.083418 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-23 00:33:01.083429 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-23 00:33:01.083440 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-23 00:33:01.083451 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-23 00:33:01.083462 | orchestrator | 2026-03-23 00:33:01.083473 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-23 00:33:01.083484 | orchestrator | Monday 23 March 2026 00:32:55 +0000 (0:00:01.217) 0:06:07.915 ********** 2026-03-23 00:33:01.083507 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:33:01.083518 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:33:01.083529 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:33:01.083540 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:33:01.083551 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:33:01.083562 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:33:01.083573 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:33:01.083584 | orchestrator | 2026-03-23 00:33:01.083595 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-23 00:33:01.083606 | orchestrator | Monday 23 March 2026 00:32:56 +0000 (0:00:00.642) 0:06:08.558 ********** 2026-03-23 00:33:01.083617 | orchestrator | ok: [testbed-manager] 2026-03-23 00:33:01.083628 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:33:01.083639 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:33:01.083650 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:33:01.083661 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:33:01.083672 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:33:01.083683 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:33:01.083694 | orchestrator | 2026-03-23 00:33:01.083708 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-23 00:33:01.083730 | orchestrator | Monday 23 March 2026 00:33:00 +0000 (0:00:04.251) 0:06:12.810 ********** 2026-03-23 00:33:01.083749 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:33:01.083767 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:33:01.083783 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:33:01.083801 | orchestrator | skipping: 
[testbed-node-2] 2026-03-23 00:33:01.083818 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:33:01.083835 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:33:01.083853 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:33:01.083872 | orchestrator | 2026-03-23 00:33:01.083945 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-23 00:33:01.083968 | orchestrator | Monday 23 March 2026 00:33:00 +0000 (0:00:00.496) 0:06:13.306 ********** 2026-03-23 00:33:01.083987 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-23 00:33:01.084008 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-23 00:33:01.084027 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:33:01.084046 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-23 00:33:01.084063 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-23 00:33:01.084074 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:33:01.084085 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-23 00:33:01.084096 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-23 00:33:01.084107 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:33:01.084132 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-23 00:33:20.548417 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-23 00:33:20.548512 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:33:20.548524 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-23 00:33:20.548531 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-23 00:33:20.548537 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:33:20.548543 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-23 00:33:20.548549 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-23 00:33:20.548556 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:33:20.548562 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-23 00:33:20.548568 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-23 00:33:20.548574 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:33:20.548580 | orchestrator | 2026-03-23 00:33:20.548588 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-23 00:33:20.548619 | orchestrator | Monday 23 March 2026 00:33:01 +0000 (0:00:00.584) 0:06:13.891 ********** 2026-03-23 00:33:20.548625 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:33:20.548631 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:33:20.548637 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:33:20.548643 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:33:20.548649 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:33:20.548655 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:33:20.548661 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:33:20.548666 | orchestrator | 2026-03-23 00:33:20.548672 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-23 00:33:20.548678 | orchestrator | Monday 23 March 2026 00:33:01 +0000 (0:00:00.479) 0:06:14.370 ********** 2026-03-23 00:33:20.548684 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:33:20.548690 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:33:20.548696 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:33:20.548702 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:33:20.548708 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:33:20.548714 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:33:20.548720 | orchestrator | skipping: [testbed-node-5] 
2026-03-23 00:33:20.548726 | orchestrator |
2026-03-23 00:33:20.548733 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-23 00:33:20.548740 | orchestrator | Monday 23 March 2026 00:33:02 +0000 (0:00:00.608) 0:06:14.979 **********
2026-03-23 00:33:20.548747 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:33:20.548753 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:33:20.548759 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:33:20.548767 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:33:20.548775 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:33:20.548783 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:33:20.548790 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:33:20.548798 | orchestrator |
2026-03-23 00:33:20.548806 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-23 00:33:20.548827 | orchestrator | Monday 23 March 2026 00:33:02 +0000 (0:00:00.510) 0:06:15.490 **********
2026-03-23 00:33:20.548835 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:20.548843 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:20.548851 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:20.548859 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:20.548867 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:20.548876 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:20.548884 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:20.548892 | orchestrator |
2026-03-23 00:33:20.548900 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-23 00:33:20.548909 | orchestrator | Monday 23 March 2026 00:33:04 +0000 (0:00:01.865) 0:06:17.355 **********
2026-03-23 00:33:20.548919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:33:20.548928 | orchestrator |
2026-03-23 00:33:20.548936 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-23 00:33:20.548945 | orchestrator | Monday 23 March 2026 00:33:05 +0000 (0:00:00.845) 0:06:18.200 **********
2026-03-23 00:33:20.548953 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:20.548961 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:20.548969 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:20.548977 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:33:20.548986 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:20.548994 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:20.549003 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:20.549011 | orchestrator |
2026-03-23 00:33:20.549020 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-23 00:33:20.549037 | orchestrator | Monday 23 March 2026 00:33:06 +0000 (0:00:01.039) 0:06:19.240 **********
2026-03-23 00:33:20.549046 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:20.549053 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:20.549061 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:20.549070 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:33:20.549078 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:20.549086 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:20.549094 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:20.549103 | orchestrator |
2026-03-23 00:33:20.549111 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-23 00:33:20.549119 | orchestrator | Monday 23 March 2026 00:33:07 +0000 (0:00:00.873) 0:06:20.113 **********
2026-03-23 00:33:20.549128 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:20.549135 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:20.549144 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:20.549152 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:33:20.549187 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:20.549194 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:20.549200 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:20.549206 | orchestrator |
2026-03-23 00:33:20.549213 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-23 00:33:20.549239 | orchestrator | Monday 23 March 2026 00:33:08 +0000 (0:00:01.302) 0:06:21.416 **********
2026-03-23 00:33:20.549248 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:33:20.549256 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:20.549265 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:20.549273 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:20.549281 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:20.549289 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:20.549297 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:20.549305 | orchestrator |
2026-03-23 00:33:20.549313 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-23 00:33:20.549322 | orchestrator | Monday 23 March 2026 00:33:10 +0000 (0:00:01.460) 0:06:22.876 **********
2026-03-23 00:33:20.549329 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:20.549338 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:20.549346 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:20.549354 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:33:20.549363 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:20.549371 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:20.549380 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:20.549388 | orchestrator |
2026-03-23 00:33:20.549395 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-23 00:33:20.549401 | orchestrator | Monday 23 March 2026 00:33:11 +0000 (0:00:01.388) 0:06:24.265 **********
2026-03-23 00:33:20.549407 | orchestrator | changed: [testbed-manager]
2026-03-23 00:33:20.549413 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:20.549418 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:20.549424 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:33:20.549430 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:20.549436 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:20.549442 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:20.549448 | orchestrator |
2026-03-23 00:33:20.549455 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-23 00:33:20.549461 | orchestrator | Monday 23 March 2026 00:33:13 +0000 (0:00:01.634) 0:06:25.900 **********
2026-03-23 00:33:20.549467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:33:20.549475 | orchestrator |
2026-03-23 00:33:20.549481 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-23 00:33:20.549487 | orchestrator | Monday 23 March 2026 00:33:14 +0000 (0:00:01.477) 0:06:26.762 **********
2026-03-23 00:33:20.549507 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:20.549513 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:20.549519 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:20.549525 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:20.549531 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:20.549537 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:20.549543 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:20.549549 | orchestrator |
2026-03-23 00:33:20.549555 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-23 00:33:20.549561 | orchestrator | Monday 23 March 2026 00:33:15 +0000 (0:00:01.477) 0:06:28.239 **********
2026-03-23 00:33:20.549568 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:20.549574 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:20.549580 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:20.549586 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:20.549592 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:20.549598 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:20.549604 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:20.549610 | orchestrator |
2026-03-23 00:33:20.549616 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-23 00:33:20.549622 | orchestrator | Monday 23 March 2026 00:33:17 +0000 (0:00:01.317) 0:06:29.557 **********
2026-03-23 00:33:20.549628 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:20.549635 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:20.549641 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:20.549646 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:20.549652 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:20.549658 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:20.549664 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:20.549669 | orchestrator |
2026-03-23 00:33:20.549675 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-23 00:33:20.549681 | orchestrator | Monday 23 March 2026 00:33:18 +0000 (0:00:01.122) 0:06:30.680 **********
2026-03-23 00:33:20.549687 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:20.549693 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:20.549699 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:20.549705 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:20.549711 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:20.549717 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:20.549723 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:20.549729 | orchestrator |
2026-03-23 00:33:20.549735 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-23 00:33:20.549741 | orchestrator | Monday 23 March 2026 00:33:19 +0000 (0:00:01.233) 0:06:31.913 **********
2026-03-23 00:33:20.549748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:33:20.549755 | orchestrator |
2026-03-23 00:33:20.549760 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-23 00:33:20.549766 | orchestrator | Monday 23 March 2026 00:33:20 +0000 (0:00:00.870) 0:06:32.783 **********
2026-03-23 00:33:20.549772 | orchestrator |
2026-03-23 00:33:20.549778 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-23 00:33:20.549784 | orchestrator | Monday 23 March 2026 00:33:20 +0000 (0:00:00.055) 0:06:32.838 **********
2026-03-23 00:33:20.549790 | orchestrator |
2026-03-23 00:33:20.549796 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-23 00:33:20.549802 | orchestrator | Monday 23 March 2026 00:33:20 +0000 (0:00:00.194) 0:06:33.033 **********
2026-03-23 00:33:20.549808 | orchestrator |
2026-03-23 00:33:20.549813 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-23 00:33:20.549830 | orchestrator | Monday 23 March 2026 00:33:20 +0000 (0:00:00.040) 0:06:33.073 **********
2026-03-23 00:33:48.283490 | orchestrator |
2026-03-23 00:33:48.283628 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-23 00:33:48.283686 | orchestrator | Monday 23 March 2026 00:33:20 +0000 (0:00:00.041) 0:06:33.115 **********
2026-03-23 00:33:48.283700 | orchestrator |
2026-03-23 00:33:48.283711 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-23 00:33:48.283722 | orchestrator | Monday 23 March 2026 00:33:20 +0000 (0:00:00.060) 0:06:33.176 **********
2026-03-23 00:33:48.283733 | orchestrator |
2026-03-23 00:33:48.283745 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-23 00:33:48.283756 | orchestrator | Monday 23 March 2026 00:33:20 +0000 (0:00:00.039) 0:06:33.216 **********
2026-03-23 00:33:48.283767 | orchestrator |
2026-03-23 00:33:48.283778 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-23 00:33:48.283789 | orchestrator | Monday 23 March 2026 00:33:20 +0000 (0:00:00.039) 0:06:33.255 **********
2026-03-23 00:33:48.283800 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:48.283813 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:48.283824 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:48.283835 | orchestrator |
2026-03-23 00:33:48.283846 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-23 00:33:48.283857 | orchestrator | Monday 23 March 2026 00:33:22 +0000 (0:00:01.291) 0:06:34.547 **********
2026-03-23 00:33:48.283868 | orchestrator | changed: [testbed-manager]
2026-03-23 00:33:48.283881 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:48.283892 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:48.283903 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:48.283914 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:33:48.283924 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:48.283936 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:48.283947 | orchestrator |
2026-03-23 00:33:48.283958 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-23 00:33:48.283969 | orchestrator | Monday 23 March 2026 00:33:23 +0000 (0:00:01.394) 0:06:35.941 **********
2026-03-23 00:33:48.283980 | orchestrator | changed: [testbed-manager]
2026-03-23 00:33:48.283991 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:48.284002 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:48.284013 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:33:48.284025 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:48.284038 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:48.284050 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:48.284062 | orchestrator |
2026-03-23 00:33:48.284075 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-23 00:33:48.284088 | orchestrator | Monday 23 March 2026 00:33:24 +0000 (0:00:01.228) 0:06:37.170 **********
2026-03-23 00:33:48.284100 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:33:48.284113 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:33:48.284125 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:48.284174 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:48.284198 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:48.284225 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:48.284244 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:48.284263 | orchestrator |
2026-03-23 00:33:48.284303 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-23 00:33:48.284324 | orchestrator | Monday 23 March 2026 00:33:27 +0000 (0:00:02.585) 0:06:39.755 **********
2026-03-23 00:33:48.284345 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:33:48.284366 | orchestrator |
2026-03-23 00:33:48.284387 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-23 00:33:48.284408 | orchestrator | Monday 23 March 2026 00:33:27 +0000 (0:00:00.094) 0:06:39.850 **********
2026-03-23 00:33:48.284421 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:48.284432 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:48.284443 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:48.284454 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:33:48.284478 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:48.284489 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:48.284500 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:48.284511 | orchestrator |
2026-03-23 00:33:48.284528 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-23 00:33:48.284556 | orchestrator | Monday 23 March 2026 00:33:28 +0000 (0:00:01.328) 0:06:41.179 **********
2026-03-23 00:33:48.284577 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:33:48.284594 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:33:48.284613 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:33:48.284632 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:33:48.284650 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:33:48.284665 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:33:48.284677 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:33:48.284687 | orchestrator |
2026-03-23 00:33:48.284698 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-23 00:33:48.284709 | orchestrator | Monday 23 March 2026 00:33:29 +0000 (0:00:00.518) 0:06:41.697 **********
2026-03-23 00:33:48.284722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:33:48.284735 | orchestrator |
2026-03-23 00:33:48.284746 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-23 00:33:48.284758 | orchestrator | Monday 23 March 2026 00:33:29 +0000 (0:00:00.835) 0:06:42.532 **********
2026-03-23 00:33:48.284768 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:48.284779 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:48.284790 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:48.284801 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:48.284812 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:48.284823 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:48.284834 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:48.284844 | orchestrator |
2026-03-23 00:33:48.284855 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-23 00:33:48.284866 | orchestrator | Monday 23 March 2026 00:33:31 +0000 (0:00:01.107) 0:06:43.640 **********
2026-03-23 00:33:48.284877 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-23 00:33:48.284907 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-23 00:33:48.284919 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-23 00:33:48.284930 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-23 00:33:48.284941 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-23 00:33:48.284952 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-23 00:33:48.284963 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-23 00:33:48.284974 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-23 00:33:48.284986 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-23 00:33:48.284997 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-23 00:33:48.285007 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-23 00:33:48.285018 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-23 00:33:48.285029 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-23 00:33:48.285040 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-23 00:33:48.285051 | orchestrator |
2026-03-23 00:33:48.285063 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-23 00:33:48.285074 | orchestrator | Monday 23 March 2026 00:33:33 +0000 (0:00:02.572) 0:06:46.212 **********
2026-03-23 00:33:48.285085 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:33:48.285096 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:33:48.285107 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:33:48.285129 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:33:48.285217 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:33:48.285229 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:33:48.285240 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:33:48.285251 | orchestrator |
2026-03-23 00:33:48.285262 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-23 00:33:48.285273 | orchestrator | Monday 23 March 2026 00:33:34 +0000 (0:00:00.465) 0:06:46.678 **********
2026-03-23 00:33:48.285286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:33:48.285299 | orchestrator |
2026-03-23 00:33:48.285310 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-23 00:33:48.285321 | orchestrator | Monday 23 March 2026 00:33:35 +0000 (0:00:00.935) 0:06:47.613 **********
2026-03-23 00:33:48.285332 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:48.285343 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:48.285354 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:48.285365 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:48.285376 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:48.285387 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:48.285398 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:48.285408 | orchestrator |
2026-03-23 00:33:48.285429 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-23 00:33:48.285440 | orchestrator | Monday 23 March 2026 00:33:35 +0000 (0:00:00.830) 0:06:48.444 **********
2026-03-23 00:33:48.285451 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:48.285462 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:48.285473 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:48.285484 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:48.285495 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:48.285506 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:48.285519 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:48.285537 | orchestrator |
2026-03-23 00:33:48.285557 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-23 00:33:48.285576 | orchestrator | Monday 23 March 2026 00:33:36 +0000 (0:00:00.796) 0:06:49.240 **********
2026-03-23 00:33:48.285593 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:33:48.285612 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:33:48.285632 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:33:48.285650 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:33:48.285669 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:33:48.285681 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:33:48.285691 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:33:48.285702 | orchestrator |
2026-03-23 00:33:48.285713 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-23 00:33:48.285724 | orchestrator | Monday 23 March 2026 00:33:37 +0000 (0:00:00.525) 0:06:49.766 **********
2026-03-23 00:33:48.285735 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:48.285746 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:33:48.285757 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:33:48.285768 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:33:48.285778 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:33:48.285789 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:33:48.285800 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:33:48.285811 | orchestrator |
2026-03-23 00:33:48.285821 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-23 00:33:48.285832 | orchestrator | Monday 23 March 2026 00:33:39 +0000 (0:00:01.944) 0:06:51.711 **********
2026-03-23 00:33:48.285843 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:33:48.285854 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:33:48.285865 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:33:48.285876 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:33:48.285887 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:33:48.285907 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:33:48.285918 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:33:48.285929 | orchestrator |
2026-03-23 00:33:48.285940 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-23 00:33:48.285951 | orchestrator | Monday 23 March 2026 00:33:39 +0000 (0:00:00.628) 0:06:52.339 **********
2026-03-23 00:33:48.285961 | orchestrator | ok: [testbed-manager]
2026-03-23 00:33:48.285972 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:33:48.285983 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:33:48.285994 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:33:48.286005 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:33:48.286081 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:33:48.286103 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:34:21.514955 | orchestrator |
2026-03-23 00:34:21.515056 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-23 00:34:21.515066 | orchestrator | Monday 23 March 2026 00:33:48 +0000 (0:00:08.534) 0:07:00.874 **********
2026-03-23 00:34:21.515071 | orchestrator | ok: [testbed-manager]
2026-03-23 00:34:21.515076 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:34:21.515082 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:34:21.515086 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:34:21.515149 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:34:21.515154 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:34:21.515158 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:34:21.515162 | orchestrator |
2026-03-23 00:34:21.515166 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-23 00:34:21.515171 | orchestrator | Monday 23 March 2026 00:33:49 +0000 (0:00:01.324) 0:07:02.198 **********
2026-03-23 00:34:21.515175 | orchestrator | ok: [testbed-manager]
2026-03-23 00:34:21.515179 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:34:21.515183 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:34:21.515187 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:34:21.515191 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:34:21.515195 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:34:21.515199 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:34:21.515203 | orchestrator |
2026-03-23 00:34:21.515206 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-23 00:34:21.515210 | orchestrator | Monday 23 March 2026 00:33:51 +0000 (0:00:01.754) 0:07:03.952 **********
2026-03-23 00:34:21.515214 | orchestrator | ok: [testbed-manager]
2026-03-23 00:34:21.515218 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:34:21.515222 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:34:21.515226 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:34:21.515229 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:34:21.515233 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:34:21.515237 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:34:21.515241 | orchestrator |
2026-03-23 00:34:21.515245 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-23 00:34:21.515248 | orchestrator | Monday 23 March 2026 00:33:53 +0000 (0:00:01.839) 0:07:05.792 **********
2026-03-23 00:34:21.515252 | orchestrator | ok: [testbed-manager]
2026-03-23 00:34:21.515256 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:34:21.515260 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:34:21.515264 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:34:21.515268 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:34:21.515271 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:34:21.515275 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:34:21.515279 | orchestrator |
2026-03-23 00:34:21.515283 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-23 00:34:21.515289 | orchestrator | Monday 23 March 2026 00:33:54 +0000 (0:00:00.854) 0:07:06.646 **********
2026-03-23 00:34:21.515295 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:34:21.515305 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:34:21.515311 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:34:21.515340 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:34:21.515346 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:34:21.515352 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:34:21.515358 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:34:21.515363 | orchestrator |
2026-03-23 00:34:21.515369 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-23 00:34:21.515375 | orchestrator | Monday 23 March 2026 00:33:54 +0000 (0:00:00.805) 0:07:07.452 **********
2026-03-23 00:34:21.515380 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:34:21.515386 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:34:21.515392 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:34:21.515398 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:34:21.515404 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:34:21.515410 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:34:21.515415 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:34:21.515420 | orchestrator |
2026-03-23 00:34:21.515426 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-23 00:34:21.515431 | orchestrator | Monday 23 March 2026 00:33:55 +0000 (0:00:00.679) 0:07:08.132 **********
2026-03-23 00:34:21.515437 | orchestrator | ok: [testbed-manager]
2026-03-23 00:34:21.515443 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:34:21.515449 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:34:21.515455 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:34:21.515461 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:34:21.515466 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:34:21.515471 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:34:21.515476 | orchestrator |
2026-03-23 00:34:21.515481 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-23 00:34:21.515487 | orchestrator | Monday 23 March 2026 00:33:56 +0000 (0:00:00.517) 0:07:08.649 **********
2026-03-23 00:34:21.515493 | orchestrator | ok: [testbed-manager]
2026-03-23 00:34:21.515499 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:34:21.515505 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:34:21.515510 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:34:21.515515 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:34:21.515521 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:34:21.515527 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:34:21.515533 | orchestrator |
2026-03-23 00:34:21.515539 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-23 00:34:21.515544 | orchestrator | Monday 23 March 2026 00:33:56 +0000 (0:00:00.500) 0:07:09.150 **********
2026-03-23 00:34:21.515551 | orchestrator | ok: [testbed-manager]
2026-03-23 00:34:21.515556 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:34:21.515563 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:34:21.515568 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:34:21.515574 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:34:21.515580 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:34:21.515586 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:34:21.515592 | orchestrator |
2026-03-23 00:34:21.515598 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-23 00:34:21.515605 | orchestrator | Monday 23 March 2026 00:33:57 +0000 (0:00:00.515) 0:07:09.666 **********
2026-03-23 00:34:21.515611 | orchestrator | ok: [testbed-manager]
2026-03-23 00:34:21.515617 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:34:21.515623 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:34:21.515629 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:34:21.515635 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:34:21.515641 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:34:21.515665 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:34:21.515671 | orchestrator | 2026-03-23 00:34:21.515693 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-23 00:34:21.515701 | orchestrator | Monday 23 March 2026 00:34:02 +0000 (0:00:05.299) 0:07:14.965 ********** 2026-03-23 00:34:21.515707 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:34:21.515713 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:34:21.515728 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:34:21.515734 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:34:21.515740 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:34:21.515747 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:34:21.515753 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:34:21.515759 | orchestrator | 2026-03-23 00:34:21.515766 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-23 00:34:21.515773 | orchestrator | Monday 23 March 2026 00:34:03 +0000 (0:00:00.670) 0:07:15.635 ********** 2026-03-23 00:34:21.515783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:34:21.515791 | orchestrator | 2026-03-23 00:34:21.515795 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-23 00:34:21.515800 | orchestrator | Monday 23 March 2026 00:34:03 +0000 (0:00:00.787) 0:07:16.423 ********** 2026-03-23 00:34:21.515805 | orchestrator | ok: [testbed-manager] 2026-03-23 00:34:21.515809 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:34:21.515822 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:34:21.515826 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:34:21.515832 | 
orchestrator | ok: [testbed-node-1] 2026-03-23 00:34:21.515839 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:34:21.515848 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:34:21.515856 | orchestrator | 2026-03-23 00:34:21.515862 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-23 00:34:21.515869 | orchestrator | Monday 23 March 2026 00:34:05 +0000 (0:00:02.083) 0:07:18.506 ********** 2026-03-23 00:34:21.515876 | orchestrator | ok: [testbed-manager] 2026-03-23 00:34:21.515883 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:34:21.515889 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:34:21.515897 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:34:21.515902 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:34:21.515906 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:34:21.515910 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:34:21.515914 | orchestrator | 2026-03-23 00:34:21.515918 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-23 00:34:21.515922 | orchestrator | Monday 23 March 2026 00:34:07 +0000 (0:00:01.318) 0:07:19.824 ********** 2026-03-23 00:34:21.515925 | orchestrator | ok: [testbed-manager] 2026-03-23 00:34:21.515930 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:34:21.515933 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:34:21.515937 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:34:21.515941 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:34:21.515945 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:34:21.515948 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:34:21.515952 | orchestrator | 2026-03-23 00:34:21.515956 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-23 00:34:21.515964 | orchestrator | Monday 23 March 2026 00:34:08 +0000 (0:00:00.926) 0:07:20.751 ********** 2026-03-23 00:34:21.515969 | orchestrator | changed: 
[testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-23 00:34:21.515975 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-23 00:34:21.515978 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-23 00:34:21.515982 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-23 00:34:21.515986 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-23 00:34:21.515990 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-23 00:34:21.515998 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-23 00:34:21.516002 | orchestrator | 2026-03-23 00:34:21.516006 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-23 00:34:21.516009 | orchestrator | Monday 23 March 2026 00:34:09 +0000 (0:00:01.729) 0:07:22.480 ********** 2026-03-23 00:34:21.516013 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:34:21.516017 | orchestrator | 2026-03-23 00:34:21.516021 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-23 00:34:21.516025 | 
orchestrator | Monday 23 March 2026 00:34:10 +0000 (0:00:00.952) 0:07:23.433 ********** 2026-03-23 00:34:21.516029 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:21.516032 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:21.516036 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:21.516040 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:21.516044 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:21.516048 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:21.516051 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:21.516055 | orchestrator | 2026-03-23 00:34:21.516064 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-23 00:34:52.558907 | orchestrator | Monday 23 March 2026 00:34:21 +0000 (0:00:10.607) 0:07:34.041 ********** 2026-03-23 00:34:52.559004 | orchestrator | ok: [testbed-manager] 2026-03-23 00:34:52.559015 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:34:52.559023 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:34:52.559031 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:34:52.559038 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:34:52.559045 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:34:52.559053 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:34:52.559060 | orchestrator | 2026-03-23 00:34:52.559068 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-23 00:34:52.559142 | orchestrator | Monday 23 March 2026 00:34:23 +0000 (0:00:01.886) 0:07:35.928 ********** 2026-03-23 00:34:52.559154 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:34:52.559166 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:34:52.559179 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:34:52.559187 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:34:52.559194 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:34:52.559202 | orchestrator | ok: [testbed-node-5] 
2026-03-23 00:34:52.559210 | orchestrator | 2026-03-23 00:34:52.559218 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-23 00:34:52.559225 | orchestrator | Monday 23 March 2026 00:34:24 +0000 (0:00:01.541) 0:07:37.469 ********** 2026-03-23 00:34:52.559233 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:52.559241 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:52.559249 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:52.559256 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:52.559263 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:52.559271 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:52.559278 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:52.559285 | orchestrator | 2026-03-23 00:34:52.559292 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-23 00:34:52.559300 | orchestrator | 2026-03-23 00:34:52.559307 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-23 00:34:52.559314 | orchestrator | Monday 23 March 2026 00:34:26 +0000 (0:00:01.477) 0:07:38.947 ********** 2026-03-23 00:34:52.559321 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:34:52.559329 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:34:52.559361 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:34:52.559368 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:34:52.559375 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:34:52.559382 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:34:52.559390 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:34:52.559397 | orchestrator | 2026-03-23 00:34:52.559404 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-23 00:34:52.559411 | orchestrator | 2026-03-23 00:34:52.559418 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-03-23 00:34:52.559426 | orchestrator | Monday 23 March 2026 00:34:26 +0000 (0:00:00.543) 0:07:39.490 ********** 2026-03-23 00:34:52.559433 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:52.559440 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:52.559448 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:52.559456 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:52.559465 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:52.559486 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:52.559495 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:52.559504 | orchestrator | 2026-03-23 00:34:52.559512 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-23 00:34:52.559520 | orchestrator | Monday 23 March 2026 00:34:28 +0000 (0:00:01.356) 0:07:40.847 ********** 2026-03-23 00:34:52.559528 | orchestrator | ok: [testbed-manager] 2026-03-23 00:34:52.559537 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:34:52.559545 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:34:52.559553 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:34:52.559561 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:34:52.559570 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:34:52.559578 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:34:52.559586 | orchestrator | 2026-03-23 00:34:52.559595 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-23 00:34:52.559603 | orchestrator | Monday 23 March 2026 00:34:29 +0000 (0:00:01.588) 0:07:42.435 ********** 2026-03-23 00:34:52.559611 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:34:52.559619 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:34:52.559627 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:34:52.559636 | orchestrator | skipping: [testbed-node-2] 
2026-03-23 00:34:52.559644 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:34:52.559652 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:34:52.559660 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:34:52.559668 | orchestrator | 2026-03-23 00:34:52.559676 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-23 00:34:52.559684 | orchestrator | Monday 23 March 2026 00:34:30 +0000 (0:00:00.507) 0:07:42.942 ********** 2026-03-23 00:34:52.559693 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:34:52.559704 | orchestrator | 2026-03-23 00:34:52.559712 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-23 00:34:52.559720 | orchestrator | Monday 23 March 2026 00:34:31 +0000 (0:00:00.805) 0:07:43.748 ********** 2026-03-23 00:34:52.559730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:34:52.559741 | orchestrator | 2026-03-23 00:34:52.559749 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-23 00:34:52.559758 | orchestrator | Monday 23 March 2026 00:34:32 +0000 (0:00:00.960) 0:07:44.709 ********** 2026-03-23 00:34:52.559766 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:52.559774 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:52.559782 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:52.559791 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:52.559806 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:52.559814 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:52.559821 | 
orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:52.559828 | orchestrator | 2026-03-23 00:34:52.559851 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-23 00:34:52.559859 | orchestrator | Monday 23 March 2026 00:34:41 +0000 (0:00:09.290) 0:07:53.999 ********** 2026-03-23 00:34:52.559866 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:52.559874 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:52.559881 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:52.559888 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:52.559895 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:52.559943 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:52.559950 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:52.559958 | orchestrator | 2026-03-23 00:34:52.559965 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-23 00:34:52.559973 | orchestrator | Monday 23 March 2026 00:34:42 +0000 (0:00:00.861) 0:07:54.861 ********** 2026-03-23 00:34:52.559980 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:52.559987 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:52.559995 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:52.560002 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:52.560009 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:52.560016 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:52.560024 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:52.560031 | orchestrator | 2026-03-23 00:34:52.560038 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-23 00:34:52.560046 | orchestrator | Monday 23 March 2026 00:34:43 +0000 (0:00:01.365) 0:07:56.227 ********** 2026-03-23 00:34:52.560053 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:52.560060 | orchestrator | 
changed: [testbed-node-0] 2026-03-23 00:34:52.560068 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:52.560098 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:52.560106 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:52.560113 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:52.560121 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:52.560128 | orchestrator | 2026-03-23 00:34:52.560135 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-23 00:34:52.560143 | orchestrator | Monday 23 March 2026 00:34:45 +0000 (0:00:01.932) 0:07:58.159 ********** 2026-03-23 00:34:52.560150 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:52.560157 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:52.560165 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:52.560172 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:52.560179 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:52.560187 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:52.560194 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:52.560201 | orchestrator | 2026-03-23 00:34:52.560209 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-23 00:34:52.560216 | orchestrator | Monday 23 March 2026 00:34:46 +0000 (0:00:01.231) 0:07:59.390 ********** 2026-03-23 00:34:52.560224 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:52.560231 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:52.560239 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:52.560246 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:52.560254 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:52.560266 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:52.560274 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:52.560281 | orchestrator | 2026-03-23 
00:34:52.560288 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-23 00:34:52.560296 | orchestrator | 2026-03-23 00:34:52.560304 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-23 00:34:52.560311 | orchestrator | Monday 23 March 2026 00:34:47 +0000 (0:00:01.103) 0:08:00.494 ********** 2026-03-23 00:34:52.560325 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:34:52.560332 | orchestrator | 2026-03-23 00:34:52.560340 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-23 00:34:52.560349 | orchestrator | Monday 23 March 2026 00:34:48 +0000 (0:00:00.930) 0:08:01.424 ********** 2026-03-23 00:34:52.560361 | orchestrator | ok: [testbed-manager] 2026-03-23 00:34:52.560372 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:34:52.560383 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:34:52.560393 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:34:52.560404 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:34:52.560415 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:34:52.560425 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:34:52.560438 | orchestrator | 2026-03-23 00:34:52.560450 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-23 00:34:52.560461 | orchestrator | Monday 23 March 2026 00:34:49 +0000 (0:00:00.808) 0:08:02.232 ********** 2026-03-23 00:34:52.560471 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:52.560481 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:52.560492 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:52.560502 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:52.560512 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:52.560525 | 
orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:52.560536 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:52.560546 | orchestrator | 2026-03-23 00:34:52.560557 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-23 00:34:52.560569 | orchestrator | Monday 23 March 2026 00:34:50 +0000 (0:00:01.247) 0:08:03.480 ********** 2026-03-23 00:34:52.560580 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:34:52.560593 | orchestrator | 2026-03-23 00:34:52.560605 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-23 00:34:52.560616 | orchestrator | Monday 23 March 2026 00:34:51 +0000 (0:00:00.788) 0:08:04.269 ********** 2026-03-23 00:34:52.560627 | orchestrator | ok: [testbed-manager] 2026-03-23 00:34:52.560640 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:34:52.560652 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:34:52.560664 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:34:52.560676 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:34:52.560689 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:34:52.560701 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:34:52.560712 | orchestrator | 2026-03-23 00:34:52.560729 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-23 00:34:54.049875 | orchestrator | Monday 23 March 2026 00:34:52 +0000 (0:00:00.813) 0:08:05.083 ********** 2026-03-23 00:34:54.049973 | orchestrator | changed: [testbed-manager] 2026-03-23 00:34:54.049990 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:34:54.050001 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:34:54.050012 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:34:54.050214 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:34:54.050222 | 
orchestrator | changed: [testbed-node-4] 2026-03-23 00:34:54.050229 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:34:54.050235 | orchestrator | 2026-03-23 00:34:54.050243 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:34:54.050251 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-23 00:34:54.050259 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-23 00:34:54.050265 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-23 00:34:54.050296 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-23 00:34:54.050303 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-23 00:34:54.050309 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-23 00:34:54.050315 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-23 00:34:54.050322 | orchestrator | 2026-03-23 00:34:54.050328 | orchestrator | 2026-03-23 00:34:54.050334 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:34:54.050341 | orchestrator | Monday 23 March 2026 00:34:53 +0000 (0:00:01.223) 0:08:06.306 ********** 2026-03-23 00:34:54.050348 | orchestrator | =============================================================================== 2026-03-23 00:34:54.050358 | orchestrator | osism.commons.packages : Install required packages --------------------- 70.84s 2026-03-23 00:34:54.050367 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.34s 2026-03-23 00:34:54.050374 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 33.12s 2026-03-23 00:34:54.050392 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.15s 2026-03-23 00:34:54.050398 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.54s 2026-03-23 00:34:54.050405 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.16s 2026-03-23 00:34:54.050411 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.67s 2026-03-23 00:34:54.050417 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.61s 2026-03-23 00:34:54.050423 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.31s 2026-03-23 00:34:54.050431 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.94s 2026-03-23 00:34:54.050437 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.65s 2026-03-23 00:34:54.050443 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.51s 2026-03-23 00:34:54.050449 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.50s 2026-03-23 00:34:54.050456 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.29s 2026-03-23 00:34:54.050462 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.53s 2026-03-23 00:34:54.050468 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.80s 2026-03-23 00:34:54.050474 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.28s 2026-03-23 00:34:54.050480 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.33s 2026-03-23 00:34:54.050487 | orchestrator | 
osism.services.chrony : Populate service facts -------------------------- 5.30s 2026-03-23 00:34:54.050493 | orchestrator | osism.commons.services : Populate service facts ------------------------- 4.92s 2026-03-23 00:34:54.214353 | orchestrator | + osism apply fail2ban 2026-03-23 00:35:05.741915 | orchestrator | 2026-03-23 00:35:05 | INFO  | Prepare task for execution of fail2ban. 2026-03-23 00:35:05.833247 | orchestrator | 2026-03-23 00:35:05 | INFO  | Task 5a7dca2e-1c43-4978-bf80-01e47e9fdbf3 (fail2ban) was prepared for execution. 2026-03-23 00:35:05.833328 | orchestrator | 2026-03-23 00:35:05 | INFO  | It takes a moment until task 5a7dca2e-1c43-4978-bf80-01e47e9fdbf3 (fail2ban) has been started and output is visible here. 2026-03-23 00:35:27.513619 | orchestrator | 2026-03-23 00:35:27.513731 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-23 00:35:27.513794 | orchestrator | 2026-03-23 00:35:27.513808 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-23 00:35:27.513819 | orchestrator | Monday 23 March 2026 00:35:09 +0000 (0:00:00.339) 0:00:00.339 ********** 2026-03-23 00:35:27.513832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:35:27.513846 | orchestrator | 2026-03-23 00:35:27.513858 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-23 00:35:27.513869 | orchestrator | Monday 23 March 2026 00:35:10 +0000 (0:00:01.133) 0:00:01.473 ********** 2026-03-23 00:35:27.513881 | orchestrator | changed: [testbed-manager] 2026-03-23 00:35:27.513893 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:35:27.513904 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:35:27.513915 | 
orchestrator | changed: [testbed-node-5] 2026-03-23 00:35:27.513926 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:35:27.513937 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:35:27.513948 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:35:27.513959 | orchestrator | 2026-03-23 00:35:27.513970 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-23 00:35:27.513981 | orchestrator | Monday 23 March 2026 00:35:21 +0000 (0:00:11.569) 0:00:13.043 ********** 2026-03-23 00:35:27.513992 | orchestrator | changed: [testbed-manager] 2026-03-23 00:35:27.514003 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:35:27.514014 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:35:27.514198 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:35:27.514211 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:35:27.514223 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:35:27.514235 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:35:27.514248 | orchestrator | 2026-03-23 00:35:27.514261 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-23 00:35:27.514274 | orchestrator | Monday 23 March 2026 00:35:23 +0000 (0:00:01.556) 0:00:14.599 ********** 2026-03-23 00:35:27.514286 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:35:27.514300 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:35:27.514312 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:35:27.514324 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:35:27.514337 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:35:27.514349 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:35:27.514362 | orchestrator | ok: [testbed-manager] 2026-03-23 00:35:27.514374 | orchestrator | 2026-03-23 00:35:27.514387 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-23 00:35:27.514400 | orchestrator | Monday 23 March 
2026 00:35:25 +0000 (0:00:01.990) 0:00:16.590 ********** 2026-03-23 00:35:27.514413 | orchestrator | changed: [testbed-manager] 2026-03-23 00:35:27.514425 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:35:27.514438 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:35:27.514450 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:35:27.514463 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:35:27.514476 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:35:27.514488 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:35:27.514500 | orchestrator | 2026-03-23 00:35:27.514513 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:35:27.514538 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:35:27.514551 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:35:27.514562 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:35:27.514573 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:35:27.514595 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:35:27.514606 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:35:27.514617 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:35:27.514628 | orchestrator | 2026-03-23 00:35:27.514638 | orchestrator | 2026-03-23 00:35:27.514649 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:35:27.514660 | orchestrator | Monday 23 March 2026 00:35:27 +0000 (0:00:01.697) 0:00:18.287 ********** 2026-03-23 00:35:27.514671 | 
orchestrator | =============================================================================== 2026-03-23 00:35:27.514682 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.57s 2026-03-23 00:35:27.514692 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.99s 2026-03-23 00:35:27.514703 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.70s 2026-03-23 00:35:27.514714 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.56s 2026-03-23 00:35:27.514725 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.13s 2026-03-23 00:35:27.680523 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-23 00:35:27.680630 | orchestrator | + osism apply network 2026-03-23 00:35:38.877638 | orchestrator | 2026-03-23 00:35:38 | INFO  | Prepare task for execution of network. 2026-03-23 00:35:38.946781 | orchestrator | 2026-03-23 00:35:38 | INFO  | Task 75638402-8fd4-414a-81fb-986c20f1ed87 (network) was prepared for execution. 2026-03-23 00:35:38.946861 | orchestrator | 2026-03-23 00:35:38 | INFO  | It takes a moment until task 75638402-8fd4-414a-81fb-986c20f1ed87 (network) has been started and output is visible here. 
2026-03-23 00:36:04.504209 | orchestrator |
2026-03-23 00:36:04.504332 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-23 00:36:04.504356 | orchestrator |
2026-03-23 00:36:04.504373 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-23 00:36:04.504391 | orchestrator | Monday 23 March 2026 00:35:41 +0000 (0:00:00.299) 0:00:00.299 **********
2026-03-23 00:36:04.504406 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:04.504423 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:04.504439 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:04.504453 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:04.504468 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:04.504482 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:04.504491 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:04.504500 | orchestrator |
2026-03-23 00:36:04.504509 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-23 00:36:04.504518 | orchestrator | Monday 23 March 2026 00:35:42 +0000 (0:00:00.546) 0:00:00.845 **********
2026-03-23 00:36:04.504529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:36:04.504540 | orchestrator |
2026-03-23 00:36:04.504549 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-23 00:36:04.504558 | orchestrator | Monday 23 March 2026 00:35:43 +0000 (0:00:01.089) 0:00:01.935 **********
2026-03-23 00:36:04.504567 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:04.504576 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:04.504585 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:04.504593 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:04.504602 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:04.504634 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:04.504643 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:04.504652 | orchestrator |
2026-03-23 00:36:04.504661 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-23 00:36:04.504670 | orchestrator | Monday 23 March 2026 00:35:46 +0000 (0:00:02.658) 0:00:04.594 **********
2026-03-23 00:36:04.504679 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:04.504688 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:04.504696 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:04.504705 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:04.504713 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:04.504722 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:04.504731 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:04.504739 | orchestrator |
2026-03-23 00:36:04.504749 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-23 00:36:04.504758 | orchestrator | Monday 23 March 2026 00:35:47 +0000 (0:00:01.507) 0:00:06.101 **********
2026-03-23 00:36:04.504766 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-23 00:36:04.504776 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-23 00:36:04.504785 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-23 00:36:04.504794 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-23 00:36:04.504803 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-23 00:36:04.504811 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-23 00:36:04.504820 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-23 00:36:04.504829 | orchestrator |
2026-03-23 00:36:04.504838 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-23 00:36:04.504847 | orchestrator | Monday 23 March 2026 00:35:48 +0000 (0:00:01.085) 0:00:07.186 **********
2026-03-23 00:36:04.504856 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-23 00:36:04.504866 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-23 00:36:04.504875 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 00:36:04.504883 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-23 00:36:04.504892 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-23 00:36:04.504901 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-23 00:36:04.504910 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-23 00:36:04.504919 | orchestrator |
2026-03-23 00:36:04.504927 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-23 00:36:04.504936 | orchestrator | Monday 23 March 2026 00:35:51 +0000 (0:00:03.013) 0:00:10.200 **********
2026-03-23 00:36:04.504945 | orchestrator | changed: [testbed-manager]
2026-03-23 00:36:04.504955 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:36:04.504966 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:36:04.504976 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:36:04.504987 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:36:04.504998 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:36:04.505032 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:36:04.505044 | orchestrator |
2026-03-23 00:36:04.505073 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-23 00:36:04.505085 | orchestrator | Monday 23 March 2026 00:35:53 +0000 (0:00:01.483) 0:00:11.683 **********
2026-03-23 00:36:04.505096 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 00:36:04.505107 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-23 00:36:04.505117 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-23 00:36:04.505128 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-23 00:36:04.505139 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-23 00:36:04.505150 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-23 00:36:04.505161 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-23 00:36:04.505172 | orchestrator |
2026-03-23 00:36:04.505183 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-23 00:36:04.505194 | orchestrator | Monday 23 March 2026 00:35:54 +0000 (0:00:01.560) 0:00:13.244 **********
2026-03-23 00:36:04.505214 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:04.505225 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:04.505236 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:04.505247 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:04.505266 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:04.505286 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:04.505305 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:04.505325 | orchestrator |
2026-03-23 00:36:04.505345 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-23 00:36:04.505382 | orchestrator | Monday 23 March 2026 00:35:55 +0000 (0:00:00.855) 0:00:14.099 **********
2026-03-23 00:36:04.505394 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:36:04.505405 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:36:04.505417 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:36:04.505428 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:36:04.505439 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:36:04.505450 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:36:04.505461 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:36:04.505471 | orchestrator |
2026-03-23 00:36:04.505483 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-23 00:36:04.505494 | orchestrator | Monday 23 March 2026 00:35:56 +0000 (0:00:00.752) 0:00:14.851 **********
2026-03-23 00:36:04.505505 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:04.505516 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:04.505526 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:04.505538 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:04.505548 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:04.505559 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:04.505570 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:04.505581 | orchestrator |
2026-03-23 00:36:04.505592 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-23 00:36:04.505603 | orchestrator | Monday 23 March 2026 00:35:58 +0000 (0:00:02.020) 0:00:16.871 **********
2026-03-23 00:36:04.505614 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:36:04.505625 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:36:04.505636 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:36:04.505647 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:36:04.505658 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:36:04.505668 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:36:04.505680 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-03-23 00:36:04.505693 | orchestrator |
2026-03-23 00:36:04.505704 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-23 00:36:04.505715 | orchestrator | Monday 23 March 2026 00:35:59 +0000 (0:00:00.784) 0:00:17.656 **********
2026-03-23 00:36:04.505726 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:04.505737 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:36:04.505748 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:36:04.505759 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:36:04.505769 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:36:04.505781 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:36:04.505791 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:36:04.505802 | orchestrator |
2026-03-23 00:36:04.505813 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-23 00:36:04.505824 | orchestrator | Monday 23 March 2026 00:36:00 +0000 (0:00:01.390) 0:00:19.046 **********
2026-03-23 00:36:04.505843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:36:04.505856 | orchestrator |
2026-03-23 00:36:04.505867 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-23 00:36:04.505878 | orchestrator | Monday 23 March 2026 00:36:01 +0000 (0:00:00.980) 0:00:20.132 **********
2026-03-23 00:36:04.505897 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:04.505908 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:04.505919 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:04.505930 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:04.505941 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:04.505952 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:04.505963 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:04.505974 | orchestrator |
2026-03-23 00:36:04.505985 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-23 00:36:04.505996 | orchestrator | Monday 23 March 2026 00:36:02 +0000 (0:00:00.656) 0:00:21.112 **********
2026-03-23 00:36:04.506094 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:04.506106 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:04.506117 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:04.506128 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:04.506139 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:04.506150 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:04.506160 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:04.506171 | orchestrator |
2026-03-23 00:36:04.506182 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-23 00:36:04.506194 | orchestrator | Monday 23 March 2026 00:36:03 +0000 (0:00:00.656) 0:00:21.768 **********
2026-03-23 00:36:04.506205 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-23 00:36:04.506216 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-23 00:36:04.506227 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-23 00:36:04.506239 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-23 00:36:04.506250 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-23 00:36:04.506261 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-23 00:36:04.506271 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-23 00:36:04.506283 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-23 00:36:04.506294 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-23 00:36:04.506305 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-23 00:36:04.506315 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-23 00:36:04.506326 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-23 00:36:04.506337 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-23 00:36:04.506349 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-23 00:36:04.506360 | orchestrator |
2026-03-23 00:36:04.506380 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-23 00:36:19.453151 | orchestrator | Monday 23 March 2026 00:36:04 +0000 (0:00:01.033) 0:00:22.801 **********
2026-03-23 00:36:19.453297 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:36:19.453321 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:36:19.453334 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:36:19.453345 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:36:19.453356 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:36:19.453367 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:36:19.453379 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:36:19.453390 | orchestrator |
2026-03-23 00:36:19.453402 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-23 00:36:19.453415 | orchestrator | Monday 23 March 2026 00:36:05 +0000 (0:00:00.671) 0:00:23.473 **********
2026-03-23 00:36:19.453428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-3, testbed-node-2, testbed-node-5, testbed-node-4
2026-03-23 00:36:19.453469 | orchestrator |
2026-03-23 00:36:19.453481 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-23 00:36:19.453492 | orchestrator | Monday 23 March 2026 00:36:09 +0000 (0:00:04.070) 0:00:27.544 **********
2026-03-23 00:36:19.453505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453517 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-03-23 00:36:19.453551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453594 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453621 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-03-23 00:36:19.453635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-03-23 00:36:19.453668 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-03-23 00:36:19.453681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-03-23 00:36:19.453714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-03-23 00:36:19.453728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-03-23 00:36:19.453766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-03-23 00:36:19.453780 | orchestrator |
2026-03-23 00:36:19.453793 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-23 00:36:19.453806 | orchestrator | Monday 23 March 2026 00:36:14 +0000 (0:00:05.108) 0:00:32.652 **********
2026-03-23 00:36:19.453820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453833 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-03-23 00:36:19.453846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453859 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-03-23 00:36:19.453889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-03-23 00:36:19.453930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-23 00:36:19.453954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-03-23 00:36:19.453965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-03-23 00:36:19.454002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-03-23 00:36:19.454106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-03-23 00:36:31.306826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-03-23 00:36:31.306939 | orchestrator |
2026-03-23 00:36:31.307006 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-23 00:36:31.307024 | orchestrator | Monday 23 March 2026 00:36:19 +0000 (0:00:05.253) 0:00:37.906 **********
2026-03-23 00:36:31.307037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:36:31.307049 | orchestrator |
2026-03-23 00:36:31.307061 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-23 00:36:31.307072 | orchestrator | Monday 23 March 2026 00:36:20 +0000 (0:00:01.067) 0:00:38.974 **********
2026-03-23 00:36:31.307083 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:31.307096 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:31.307107 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:31.307118 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:31.307128 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:31.307139 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:31.307150 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:31.307161 | orchestrator |
2026-03-23 00:36:31.307172 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-23 00:36:31.307183 | orchestrator | Monday 23 March 2026 00:36:21 +0000 (0:00:00.979) 0:00:39.954 **********
2026-03-23 00:36:31.307194 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-23 00:36:31.307205 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-23 00:36:31.307216 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-23 00:36:31.307227 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-23 00:36:31.307238 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-23 00:36:31.307249 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-23 00:36:31.307277 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-23 00:36:31.307289 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-23 00:36:31.307299 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:36:31.307312 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-23 00:36:31.307323 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-23 00:36:31.307336 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-23 00:36:31.307348 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:36:31.307361 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-23 00:36:31.307374 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-23 00:36:31.307386 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-23 00:36:31.307398 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-23 00:36:31.307411 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-23 00:36:31.307447 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:36:31.307460 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-23 00:36:31.307472 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-23 00:36:31.307484 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-23 00:36:31.307496 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:36:31.307509 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-23 00:36:31.307521 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-23 00:36:31.307534 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-23 00:36:31.307546 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-23 00:36:31.307558 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-23 00:36:31.307570 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:36:31.307582 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:36:31.307594 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-23 00:36:31.307607 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-23 00:36:31.307619 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-23 00:36:31.307631 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-23 00:36:31.307643 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:36:31.307654 | orchestrator |
2026-03-23 00:36:31.307665 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-03-23 00:36:31.307695 | orchestrator | Monday 23 March 2026 00:36:22 +0000 (0:00:00.644) 0:00:40.598 **********
2026-03-23 00:36:31.307707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:36:31.307718 | orchestrator |
2026-03-23 00:36:31.307730 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-03-23 00:36:31.307740 | orchestrator | Monday 23 March 2026 00:36:23 +0000 (0:00:01.076) 0:00:41.675 **********
2026-03-23 00:36:31.307751 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:36:31.307762 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:36:31.307774 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:36:31.307785 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:36:31.307795 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:36:31.307806 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:36:31.307817 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:36:31.307828 | orchestrator |
2026-03-23 00:36:31.307839 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-03-23 00:36:31.307850 | orchestrator | Monday 23 March 2026 00:36:24 +0000 (0:00:00.716) 0:00:42.391 **********
2026-03-23 00:36:31.307861 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:36:31.307872 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:36:31.307883 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:36:31.307893 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:36:31.307904 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:36:31.307915 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:36:31.307926 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:36:31.307937 | orchestrator |
2026-03-23 00:36:31.307947 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-03-23 00:36:31.307984 | orchestrator | Monday 23 March 2026 00:36:24 +0000 (0:00:00.616) 0:00:43.008 **********
2026-03-23 00:36:31.307997 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:36:31.308017 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:36:31.308028 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:36:31.308039 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:36:31.308050 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:36:31.308061 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:36:31.308071 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:36:31.308083 | orchestrator |
2026-03-23 00:36:31.308094 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-03-23 00:36:31.308105 | orchestrator | Monday 23 March 2026 00:36:25 +0000 (0:00:00.763) 0:00:43.771 **********
2026-03-23 00:36:31.308116 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:31.308127 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:31.308144 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:31.308155 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:31.308166 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:31.308182 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:31.308200 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:31.308215 | orchestrator |
2026-03-23 00:36:31.308235 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-03-23 00:36:31.308254 | orchestrator | Monday 23 March 2026 00:36:26 +0000 (0:00:01.516) 0:00:45.288 **********
2026-03-23 00:36:31.308272 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:31.308287 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:31.308298 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:31.308309 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:31.308319 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:31.308330 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:31.308341 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:31.308352 | orchestrator |
2026-03-23 00:36:31.308363 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-23 00:36:31.308374 | orchestrator | Monday 23 March 2026 00:36:28 +0000 (0:00:01.207) 0:00:46.496 **********
2026-03-23 00:36:31.308385 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:31.308395 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:36:31.308406 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:36:31.308417 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:36:31.308428 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:36:31.308438 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:36:31.308449 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:36:31.308460 | orchestrator |
2026-03-23 00:36:31.308470 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-23 00:36:31.308481 | orchestrator | Monday 23 March 2026 00:36:30 +0000 (0:00:01.996) 0:00:48.492 **********
2026-03-23 00:36:31.308492 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:36:31.308503 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:36:31.308514 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:36:31.308525 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:36:31.308536 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:36:31.308547 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:36:31.308558 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:36:31.308568 | orchestrator |
2026-03-23 00:36:31.308579 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-23 00:36:31.308590 | orchestrator | Monday 23 March 2026 00:36:30 +0000 (0:00:00.541) 0:00:49.034 **********
2026-03-23 00:36:31.308601 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:36:31.308612 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:36:31.308622 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:36:31.308633 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:36:31.308644 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:36:31.308655 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:36:31.308665 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:36:31.308694 | orchestrator |
2026-03-23 00:36:31.308706 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:36:31.308718 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-23 00:36:31.308737 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-23 00:36:31.308758 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-23 00:36:31.493532 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-23 00:36:31.493650 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-23 00:36:31.493672 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-23 00:36:31.493690 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-23 00:36:31.493707 | orchestrator |
2026-03-23 00:36:31.493724 | orchestrator |
2026-03-23 00:36:31.493740 |
orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:36:31.493759 | orchestrator | Monday 23 March 2026 00:36:31 +0000 (0:00:00.569) 0:00:49.603 ********** 2026-03-23 00:36:31.493775 | orchestrator | =============================================================================== 2026-03-23 00:36:31.493791 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.25s 2026-03-23 00:36:31.493807 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.11s 2026-03-23 00:36:31.493824 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.07s 2026-03-23 00:36:31.493840 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.01s 2026-03-23 00:36:31.493856 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.66s 2026-03-23 00:36:31.493872 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.02s 2026-03-23 00:36:31.493887 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.00s 2026-03-23 00:36:31.493903 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.56s 2026-03-23 00:36:31.493921 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.52s 2026-03-23 00:36:31.493937 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.51s 2026-03-23 00:36:31.493954 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2026-03-23 00:36:31.494001 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.39s 2026-03-23 00:36:31.494081 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.21s 2026-03-23 00:36:31.494106 | orchestrator | 
osism.commons.network : Include type specific tasks --------------------- 1.09s 2026-03-23 00:36:31.494124 | orchestrator | osism.commons.network : Create required directories --------------------- 1.09s 2026-03-23 00:36:31.494140 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.09s 2026-03-23 00:36:31.494158 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.08s 2026-03-23 00:36:31.494176 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.07s 2026-03-23 00:36:31.494194 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.03s 2026-03-23 00:36:31.494210 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2026-03-23 00:36:31.617009 | orchestrator | + osism apply wireguard 2026-03-23 00:36:42.766604 | orchestrator | 2026-03-23 00:36:42 | INFO  | Prepare task for execution of wireguard. 2026-03-23 00:36:42.832106 | orchestrator | 2026-03-23 00:36:42 | INFO  | Task 8671b939-fc60-4c2c-8384-a90e6f6f0731 (wireguard) was prepared for execution. 2026-03-23 00:36:42.832220 | orchestrator | 2026-03-23 00:36:42 | INFO  | It takes a moment until task 8671b939-fc60-4c2c-8384-a90e6f6f0731 (wireguard) has been started and output is visible here. 
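The network-extra-init tasks above deploy an optional script plus a oneshot systemd unit that runs it at boot, and remove both when no extra-init script is configured (as in this run, where the deploy tasks are skipped and the remove tasks report ok). A minimal sketch of such a oneshot unit follows; the unit name, script path, and ordering are illustrative assumptions, not taken from the role's source:

```ini
# /etc/systemd/system/network-extra-init.service (illustrative sketch only)
[Unit]
Description=Run extra network initialisation after networkd is up
After=systemd-networkd.service
Wants=systemd-networkd.service

[Service]
Type=oneshot
# Script path is an assumption for illustration
ExecStart=/usr/local/bin/network-extra-init.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
```

With `Type=oneshot` and `RemainAfterExit=true`, the unit stays "active" after the script exits, so the "Enable and start" task is idempotent across replays.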
2026-03-23 00:36:59.305012 | orchestrator |
2026-03-23 00:36:59.305119 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-23 00:36:59.305134 | orchestrator |
2026-03-23 00:36:59.305145 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-23 00:36:59.305156 | orchestrator | Monday 23 March 2026 00:36:45 +0000 (0:00:00.214) 0:00:00.214 **********
2026-03-23 00:36:59.305166 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:59.305177 | orchestrator |
2026-03-23 00:36:59.305187 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-23 00:36:59.305197 | orchestrator | Monday 23 March 2026 00:36:47 +0000 (0:00:01.441) 0:00:01.655 **********
2026-03-23 00:36:59.305207 | orchestrator | changed: [testbed-manager]
2026-03-23 00:36:59.305218 | orchestrator |
2026-03-23 00:36:59.305228 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-23 00:36:59.305242 | orchestrator | Monday 23 March 2026 00:36:52 +0000 (0:00:05.118) 0:00:06.774 **********
2026-03-23 00:36:59.305258 | orchestrator | changed: [testbed-manager]
2026-03-23 00:36:59.305271 | orchestrator |
2026-03-23 00:36:59.305286 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-23 00:36:59.305302 | orchestrator | Monday 23 March 2026 00:36:52 +0000 (0:00:00.485) 0:00:07.259 **********
2026-03-23 00:36:59.305319 | orchestrator | changed: [testbed-manager]
2026-03-23 00:36:59.305336 | orchestrator |
2026-03-23 00:36:59.305353 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-23 00:36:59.305370 | orchestrator | Monday 23 March 2026 00:36:53 +0000 (0:00:00.395) 0:00:07.655 **********
2026-03-23 00:36:59.305386 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:59.305402 | orchestrator |
2026-03-23 00:36:59.305419 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-23 00:36:59.305435 | orchestrator | Monday 23 March 2026 00:36:53 +0000 (0:00:00.475) 0:00:08.131 **********
2026-03-23 00:36:59.305451 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:59.305468 | orchestrator |
2026-03-23 00:36:59.305485 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-23 00:36:59.305503 | orchestrator | Monday 23 March 2026 00:36:53 +0000 (0:00:00.391) 0:00:08.522 **********
2026-03-23 00:36:59.305517 | orchestrator | ok: [testbed-manager]
2026-03-23 00:36:59.305528 | orchestrator |
2026-03-23 00:36:59.305539 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-23 00:36:59.305550 | orchestrator | Monday 23 March 2026 00:36:54 +0000 (0:00:00.396) 0:00:08.918 **********
2026-03-23 00:36:59.305561 | orchestrator | changed: [testbed-manager]
2026-03-23 00:36:59.305573 | orchestrator |
2026-03-23 00:36:59.305584 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-23 00:36:59.305595 | orchestrator | Monday 23 March 2026 00:36:55 +0000 (0:00:01.169) 0:00:10.088 **********
2026-03-23 00:36:59.305607 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-23 00:36:59.305618 | orchestrator | changed: [testbed-manager]
2026-03-23 00:36:59.305629 | orchestrator |
2026-03-23 00:36:59.305640 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-23 00:36:59.305651 | orchestrator | Monday 23 March 2026 00:36:56 +0000 (0:00:00.879) 0:00:10.967 **********
2026-03-23 00:36:59.305685 | orchestrator | changed: [testbed-manager]
2026-03-23 00:36:59.305696 | orchestrator |
2026-03-23 00:36:59.305708 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-23 00:36:59.305719 | orchestrator | Monday 23 March 2026 00:36:58 +0000 (0:00:01.898) 0:00:12.865 **********
2026-03-23 00:36:59.305730 | orchestrator | changed: [testbed-manager]
2026-03-23 00:36:59.305741 | orchestrator |
2026-03-23 00:36:59.305752 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:36:59.305786 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:36:59.305798 | orchestrator |
2026-03-23 00:36:59.305808 | orchestrator |
2026-03-23 00:36:59.305818 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:36:59.305827 | orchestrator | Monday 23 March 2026 00:36:59 +0000 (0:00:00.824) 0:00:13.690 **********
2026-03-23 00:36:59.305837 | orchestrator | ===============================================================================
2026-03-23 00:36:59.305847 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.12s
2026-03-23 00:36:59.305862 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.90s
2026-03-23 00:36:59.305871 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.44s
2026-03-23 00:36:59.305881 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s
2026-03-23 00:36:59.305906 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s
2026-03-23 00:36:59.305916 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.82s
2026-03-23 00:36:59.305952 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s
2026-03-23 00:36:59.305965 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.48s
2026-03-23 00:36:59.305975 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s
2026-03-23 00:36:59.305985 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s
2026-03-23 00:36:59.305994 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s
2026-03-23 00:36:59.427613 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-23 00:36:59.461488 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-23 00:36:59.461575 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-23 00:36:59.534611 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 204 0 --:--:-- --:--:-- --:--:-- 205
2026-03-23 00:36:59.549755 | orchestrator | + osism apply --environment custom workarounds
2026-03-23 00:37:00.677367 | orchestrator | 2026-03-23 00:37:00 | INFO  | Trying to run play workarounds in environment custom
2026-03-23 00:37:10.778435 | orchestrator | 2026-03-23 00:37:10 | INFO  | Prepare task for execution of workarounds.
2026-03-23 00:37:10.858503 | orchestrator | 2026-03-23 00:37:10 | INFO  | Task 3c2627b8-8c78-4c2c-b84d-b977cba2b375 (workarounds) was prepared for execution.
2026-03-23 00:37:10.858590 | orchestrator | 2026-03-23 00:37:10 | INFO  | It takes a moment until task 3c2627b8-8c78-4c2c-b84d-b977cba2b375 (workarounds) has been started and output is visible here.
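The wireguard role run above generates the server key pair and a preshared key, renders /etc/wireguard/wg0.conf plus per-client configuration files, and starts wg-quick@wg0. A minimal sketch of what such a wg0.conf can look like; all keys, addresses, and the port below are placeholders for illustration, not values from this deployment:

```ini
# /etc/wireguard/wg0.conf (placeholder values only)
[Interface]
Address = 192.0.2.1/24            # placeholder tunnel address
ListenPort = 51820                # WireGuard's conventional default port
PrivateKey = <server-private-key> # as produced by `wg genkey`

[Peer]
PublicKey = <client-public-key>   # as produced by `wg pubkey`
PresharedKey = <preshared-key>    # as produced by `wg genpsk`
AllowedIPs = 192.0.2.2/32         # placeholder client tunnel address
```

The matching client configuration mirrors this: the client's own private key in [Interface], the server's public key and endpoint in [Peer], and the same preshared key on both sides.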
2026-03-23 00:37:34.654586 | orchestrator |
2026-03-23 00:37:34.654731 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 00:37:34.654749 | orchestrator |
2026-03-23 00:37:34.654761 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-23 00:37:34.654773 | orchestrator | Monday 23 March 2026 00:37:13 +0000 (0:00:00.135) 0:00:00.135 **********
2026-03-23 00:37:34.654786 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-23 00:37:34.654798 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-23 00:37:34.654809 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-23 00:37:34.654820 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-23 00:37:34.654831 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-23 00:37:34.654842 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-23 00:37:34.654854 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-23 00:37:34.654918 | orchestrator |
2026-03-23 00:37:34.654931 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-23 00:37:34.654942 | orchestrator |
2026-03-23 00:37:34.654953 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-23 00:37:34.654964 | orchestrator | Monday 23 March 2026 00:37:14 +0000 (0:00:00.566) 0:00:00.702 **********
2026-03-23 00:37:34.654975 | orchestrator | ok: [testbed-manager]
2026-03-23 00:37:34.654988 | orchestrator |
2026-03-23 00:37:34.654999 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-23 00:37:34.655010 | orchestrator |
2026-03-23 00:37:34.655021 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-23 00:37:34.655032 | orchestrator | Monday 23 March 2026 00:37:16 +0000 (0:00:02.376) 0:00:03.078 **********
2026-03-23 00:37:34.655043 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:37:34.655054 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:37:34.655065 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:37:34.655076 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:37:34.655087 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:37:34.655100 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:37:34.655113 | orchestrator |
2026-03-23 00:37:34.655125 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-23 00:37:34.655137 | orchestrator |
2026-03-23 00:37:34.655150 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-23 00:37:34.655163 | orchestrator | Monday 23 March 2026 00:37:19 +0000 (0:00:02.261) 0:00:05.339 **********
2026-03-23 00:37:34.655177 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-23 00:37:34.655191 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-23 00:37:34.655204 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-23 00:37:34.655217 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-23 00:37:34.655230 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-23 00:37:34.655248 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-23 00:37:34.655260 | orchestrator |
2026-03-23 00:37:34.655271 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-23 00:37:34.655282 | orchestrator | Monday 23 March 2026 00:37:20 +0000 (0:00:01.302) 0:00:06.642 **********
2026-03-23 00:37:34.655294 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:37:34.655306 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:37:34.655317 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:37:34.655328 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:37:34.655339 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:37:34.655350 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:37:34.655361 | orchestrator |
2026-03-23 00:37:34.655372 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-23 00:37:34.655383 | orchestrator | Monday 23 March 2026 00:37:24 +0000 (0:00:03.935) 0:00:10.577 **********
2026-03-23 00:37:34.655394 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:37:34.655404 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:37:34.655415 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:37:34.655426 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:37:34.655436 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:37:34.655447 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:37:34.655458 | orchestrator |
2026-03-23 00:37:34.655469 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-23 00:37:34.655480 | orchestrator |
2026-03-23 00:37:34.655491 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-23 00:37:34.655502 | orchestrator | Monday 23 March 2026 00:37:24 +0000 (0:00:00.506) 0:00:11.084 **********
2026-03-23 00:37:34.655519 | orchestrator | changed: [testbed-manager]
2026-03-23 00:37:34.655530 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:37:34.655541 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:37:34.655552 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:37:34.655562 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:37:34.655573 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:37:34.655584 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:37:34.655595 | orchestrator |
2026-03-23 00:37:34.655606 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-23 00:37:34.655617 | orchestrator | Monday 23 March 2026 00:37:26 +0000 (0:00:01.756) 0:00:12.840 **********
2026-03-23 00:37:34.655628 | orchestrator | changed: [testbed-manager]
2026-03-23 00:37:34.655638 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:37:34.655649 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:37:34.655660 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:37:34.655671 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:37:34.655681 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:37:34.655710 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:37:34.655722 | orchestrator |
2026-03-23 00:37:34.655733 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-23 00:37:34.655744 | orchestrator | Monday 23 March 2026 00:37:28 +0000 (0:00:01.458) 0:00:14.298 **********
2026-03-23 00:37:34.655755 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:37:34.655766 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:37:34.655777 | orchestrator | ok: [testbed-manager]
2026-03-23 00:37:34.655788 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:37:34.655798 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:37:34.655809 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:37:34.655820 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:37:34.655831 | orchestrator |
2026-03-23 00:37:34.655841 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-23 00:37:34.655852 | orchestrator | Monday 23 March 2026 00:37:29 +0000 (0:00:01.626) 0:00:15.925 **********
2026-03-23 00:37:34.655863 | orchestrator | changed: [testbed-manager]
2026-03-23 00:37:34.655874 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:37:34.655901 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:37:34.655913 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:37:34.655924 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:37:34.655935 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:37:34.655945 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:37:34.655956 | orchestrator |
2026-03-23 00:37:34.655967 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-23 00:37:34.655978 | orchestrator | Monday 23 March 2026 00:37:31 +0000 (0:00:01.536) 0:00:17.461 **********
2026-03-23 00:37:34.655989 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:37:34.655999 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:37:34.656010 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:37:34.656021 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:37:34.656031 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:37:34.656042 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:37:34.656053 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:37:34.656064 | orchestrator |
2026-03-23 00:37:34.656074 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-23 00:37:34.656085 | orchestrator |
2026-03-23 00:37:34.656096 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-23 00:37:34.656107 | orchestrator | Monday 23 March 2026 00:37:31 +0000 (0:00:00.622) 0:00:18.083 **********
2026-03-23 00:37:34.656118 | orchestrator | ok: [testbed-manager]
2026-03-23 00:37:34.656129 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:37:34.656139 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:37:34.656150 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:37:34.656161 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:37:34.656171 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:37:34.656188 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:37:34.656199 | orchestrator |
2026-03-23 00:37:34.656210 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:37:34.656222 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:37:34.656235 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:37:34.656246 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:37:34.656261 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:37:34.656273 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:37:34.656284 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:37:34.656295 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:37:34.656305 | orchestrator |
2026-03-23 00:37:34.656316 | orchestrator |
2026-03-23 00:37:34.656327 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:37:34.656338 | orchestrator | Monday 23 March 2026 00:37:34 +0000 (0:00:02.760) 0:00:20.843 **********
2026-03-23 00:37:34.656349 | orchestrator | ===============================================================================
2026-03-23 00:37:34.656360 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.94s
2026-03-23 00:37:34.656371 | orchestrator | Install python3-docker -------------------------------------------------- 2.76s
2026-03-23 00:37:34.656382 | orchestrator | Apply netplan configuration --------------------------------------------- 2.38s
2026-03-23 00:37:34.656393 | orchestrator | Apply netplan configuration --------------------------------------------- 2.26s
2026-03-23 00:37:34.656404 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.76s
2026-03-23 00:37:34.656415 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.63s
2026-03-23 00:37:34.656426 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.54s
2026-03-23 00:37:34.656437 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.46s
2026-03-23 00:37:34.656448 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.30s
2026-03-23 00:37:34.656459 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s
2026-03-23 00:37:34.656470 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.57s
2026-03-23 00:37:34.656488 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.51s
2026-03-23 00:37:34.967978 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-23 00:37:46.138278 | orchestrator | 2026-03-23 00:37:46 | INFO  | Prepare task for execution of reboot.
2026-03-23 00:37:46.238948 | orchestrator | 2026-03-23 00:37:46 | INFO  | Task b4c9b3b7-1734-477b-9d84-969698b5b1a6 (reboot) was prepared for execution.
2026-03-23 00:37:46.239029 | orchestrator | 2026-03-23 00:37:46 | INFO  | It takes a moment until task b4c9b3b7-1734-477b-9d84-969698b5b1a6 (reboot) has been started and output is visible here.
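The reboot play started above is invoked with `-e ireallymeanit=yes`, the usual confirmation-variable guard: the first task aborts the play unless the operator explicitly set the variable, and the "do not wait" variant fires the reboot asynchronously so the play can move on. A sketch of that guard pattern in Ansible; task wording, the variable default, and the async reboot command are assumptions for illustration, not the playbook's actual source:

```yaml
# Confirmation-guard sketch: abort unless the operator really meant it
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: "Pass -e ireallymeanit=yes to actually reboot the hosts"
  when: ireallymeanit | default('no') != 'yes'

# Fire-and-forget reboot: async with poll=0 returns immediately,
# so the SSH connection dropping does not fail the task
- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.shell: sleep 2 && shutdown -r now
  async: 1
  poll: 0
```

A "wait for the reboot to complete" variant would instead use the `ansible.builtin.reboot` module, which blocks until the host is reachable again; in this run that variant is skipped.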
2026-03-23 00:37:57.294807 | orchestrator |
2026-03-23 00:37:57.294943 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-23 00:37:57.294954 | orchestrator |
2026-03-23 00:37:57.294960 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-23 00:37:57.294983 | orchestrator | Monday 23 March 2026 00:37:49 +0000 (0:00:00.294) 0:00:00.294 **********
2026-03-23 00:37:57.294988 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:37:57.294994 | orchestrator |
2026-03-23 00:37:57.294999 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-23 00:37:57.295004 | orchestrator | Monday 23 March 2026 00:37:49 +0000 (0:00:00.154) 0:00:00.449 **********
2026-03-23 00:37:57.295010 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:37:57.295015 | orchestrator |
2026-03-23 00:37:57.295019 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-23 00:37:57.295024 | orchestrator | Monday 23 March 2026 00:37:50 +0000 (0:00:01.244) 0:00:01.694 **********
2026-03-23 00:37:57.295029 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:37:57.295034 | orchestrator |
2026-03-23 00:37:57.295039 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-23 00:37:57.295043 | orchestrator |
2026-03-23 00:37:57.295048 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-23 00:37:57.295053 | orchestrator | Monday 23 March 2026 00:37:50 +0000 (0:00:00.109) 0:00:01.803 **********
2026-03-23 00:37:57.295057 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:37:57.295062 | orchestrator |
2026-03-23 00:37:57.295067 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-23 00:37:57.295072 | orchestrator | Monday 23 March 2026 00:37:51 +0000 (0:00:00.099) 0:00:01.902 **********
2026-03-23 00:37:57.295076 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:37:57.295081 | orchestrator |
2026-03-23 00:37:57.295086 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-23 00:37:57.295090 | orchestrator | Monday 23 March 2026 00:37:52 +0000 (0:00:01.007) 0:00:02.910 **********
2026-03-23 00:37:57.295095 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:37:57.295100 | orchestrator |
2026-03-23 00:37:57.295105 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-23 00:37:57.295110 | orchestrator |
2026-03-23 00:37:57.295114 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-23 00:37:57.295119 | orchestrator | Monday 23 March 2026 00:37:52 +0000 (0:00:00.104) 0:00:03.014 **********
2026-03-23 00:37:57.295124 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:37:57.295128 | orchestrator |
2026-03-23 00:37:57.295133 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-23 00:37:57.295138 | orchestrator | Monday 23 March 2026 00:37:52 +0000 (0:00:00.106) 0:00:03.121 **********
2026-03-23 00:37:57.295153 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:37:57.295158 | orchestrator |
2026-03-23 00:37:57.295163 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-23 00:37:57.295167 | orchestrator | Monday 23 March 2026 00:37:53 +0000 (0:00:01.100) 0:00:04.222 **********
2026-03-23 00:37:57.295172 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:37:57.295177 | orchestrator |
2026-03-23 00:37:57.295182 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-23 00:37:57.295186 | orchestrator |
2026-03-23 00:37:57.295191 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-23 00:37:57.295196 | orchestrator | Monday 23 March 2026 00:37:53 +0000 (0:00:00.122) 0:00:04.344 **********
2026-03-23 00:37:57.295201 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:37:57.295205 | orchestrator |
2026-03-23 00:37:57.295210 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-23 00:37:57.295215 | orchestrator | Monday 23 March 2026 00:37:53 +0000 (0:00:00.090) 0:00:04.435 **********
2026-03-23 00:37:57.295219 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:37:57.295224 | orchestrator |
2026-03-23 00:37:57.295229 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-23 00:37:57.295233 | orchestrator | Monday 23 March 2026 00:37:54 +0000 (0:00:00.999) 0:00:05.435 **********
2026-03-23 00:37:57.295238 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:37:57.295247 | orchestrator |
2026-03-23 00:37:57.295251 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-23 00:37:57.295256 | orchestrator |
2026-03-23 00:37:57.295261 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-23 00:37:57.295266 | orchestrator | Monday 23 March 2026 00:37:54 +0000 (0:00:00.103) 0:00:05.538 **********
2026-03-23 00:37:57.295271 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:37:57.295275 | orchestrator |
2026-03-23 00:37:57.295283 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-23 00:37:57.295291 | orchestrator | Monday 23 March 2026 00:37:54 +0000 (0:00:00.220) 0:00:05.759 **********
2026-03-23 00:37:57.295296 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:37:57.295301 | orchestrator |
2026-03-23 00:37:57.295305 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-23 00:37:57.295310 | orchestrator | Monday 23 March 2026 00:37:55 +0000 (0:00:01.003) 0:00:06.762 **********
2026-03-23 00:37:57.295315 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:37:57.295319 | orchestrator |
2026-03-23 00:37:57.295324 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-23 00:37:57.295329 | orchestrator |
2026-03-23 00:37:57.295333 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-23 00:37:57.295338 | orchestrator | Monday 23 March 2026 00:37:55 +0000 (0:00:00.109) 0:00:06.871 **********
2026-03-23 00:37:57.295343 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:37:57.295348 | orchestrator |
2026-03-23 00:37:57.295352 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-23 00:37:57.295357 | orchestrator | Monday 23 March 2026 00:37:56 +0000 (0:00:00.097) 0:00:06.968 **********
2026-03-23 00:37:57.295362 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:37:57.295366 | orchestrator |
2026-03-23 00:37:57.295371 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-23 00:37:57.295376 | orchestrator | Monday 23 March 2026 00:37:57 +0000 (0:00:01.005) 0:00:07.974 **********
2026-03-23 00:37:57.295391 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:37:57.295397 | orchestrator |
2026-03-23 00:37:57.295401 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:37:57.295407 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:37:57.295412 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:37:57.295417 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-23 00:37:57.295422 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:37:57.295427 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:37:57.295431 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:37:57.295436 | orchestrator | 2026-03-23 00:37:57.295441 | orchestrator | 2026-03-23 00:37:57.295446 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:37:57.295450 | orchestrator | Monday 23 March 2026 00:37:57 +0000 (0:00:00.031) 0:00:08.006 ********** 2026-03-23 00:37:57.295455 | orchestrator | =============================================================================== 2026-03-23 00:37:57.295460 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.36s 2026-03-23 00:37:57.295465 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.77s 2026-03-23 00:37:57.295473 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2026-03-23 00:37:57.415169 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-23 00:38:08.658798 | orchestrator | 2026-03-23 00:38:08 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-23 00:38:08.726477 | orchestrator | 2026-03-23 00:38:08 | INFO  | Task 0598d2ec-4c8a-4881-a23e-1a055b094e88 (wait-for-connection) was prepared for execution. 2026-03-23 00:38:08.726575 | orchestrator | 2026-03-23 00:38:08 | INFO  | It takes a moment until task 0598d2ec-4c8a-4881-a23e-1a055b094e88 (wait-for-connection) has been started and output is visible here. 
2026-03-23 00:38:23.399775 | orchestrator |
2026-03-23 00:38:23.399879 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-03-23 00:38:23.399890 | orchestrator |
2026-03-23 00:38:23.399898 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-03-23 00:38:23.399905 | orchestrator | Monday 23 March 2026 00:38:11 +0000 (0:00:00.278) 0:00:00.278 **********
2026-03-23 00:38:23.399912 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:38:23.399921 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:38:23.399927 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:38:23.399934 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:38:23.399940 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:38:23.399946 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:38:23.399952 | orchestrator |
2026-03-23 00:38:23.399958 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:38:23.399965 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:38:23.399973 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:38:23.399980 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:38:23.399989 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:38:23.399995 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:38:23.400002 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:38:23.400008 | orchestrator |
2026-03-23 00:38:23.400014 | orchestrator |
2026-03-23 00:38:23.400021 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:38:23.400028 | orchestrator | Monday 23 March 2026 00:38:23 +0000 (0:00:11.513) 0:00:11.791 **********
2026-03-23 00:38:23.400035 | orchestrator | ===============================================================================
2026-03-23 00:38:23.400042 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.51s
2026-03-23 00:38:23.567926 | orchestrator | + osism apply hddtemp
2026-03-23 00:38:34.781718 | orchestrator | 2026-03-23 00:38:34 | INFO  | Prepare task for execution of hddtemp.
2026-03-23 00:38:34.857958 | orchestrator | 2026-03-23 00:38:34 | INFO  | Task 52ecc691-6310-4608-bc4f-1edc62f7b665 (hddtemp) was prepared for execution.
2026-03-23 00:38:34.858112 | orchestrator | 2026-03-23 00:38:34 | INFO  | It takes a moment until task 52ecc691-6310-4608-bc4f-1edc62f7b665 (hddtemp) has been started and output is visible here.
2026-03-23 00:39:01.589102 | orchestrator |
2026-03-23 00:39:01.589223 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-03-23 00:39:01.589239 | orchestrator |
2026-03-23 00:39:01.589250 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-03-23 00:39:01.589260 | orchestrator | Monday 23 March 2026 00:38:37 +0000 (0:00:00.326) 0:00:00.326 **********
2026-03-23 00:39:01.589298 | orchestrator | ok: [testbed-manager]
2026-03-23 00:39:01.589310 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:39:01.589320 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:39:01.589330 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:39:01.589340 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:39:01.589350 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:39:01.589360 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:39:01.589370 | orchestrator |
2026-03-23 00:39:01.589381 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-03-23 00:39:01.589390 | orchestrator | Monday 23 March 2026 00:38:38 +0000 (0:00:00.588) 0:00:00.914 **********
2026-03-23 00:39:01.589402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:39:01.589414 | orchestrator |
2026-03-23 00:39:01.589424 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-03-23 00:39:01.589434 | orchestrator | Monday 23 March 2026 00:38:39 +0000 (0:00:01.162) 0:00:02.077 **********
2026-03-23 00:39:01.589444 | orchestrator | ok: [testbed-manager]
2026-03-23 00:39:01.589454 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:39:01.589464 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:39:01.589473 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:39:01.589483 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:39:01.589493 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:39:01.589503 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:39:01.589513 | orchestrator |
2026-03-23 00:39:01.589523 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-03-23 00:39:01.589533 | orchestrator | Monday 23 March 2026 00:38:42 +0000 (0:00:02.528) 0:00:04.606 **********
2026-03-23 00:39:01.589543 | orchestrator | changed: [testbed-manager]
2026-03-23 00:39:01.589554 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:39:01.589564 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:39:01.589574 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:39:01.589584 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:39:01.589593 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:39:01.589603 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:39:01.589613 | orchestrator |
2026-03-23 00:39:01.589639 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-03-23 00:39:01.589651 | orchestrator | Monday 23 March 2026 00:38:43 +0000 (0:00:00.958) 0:00:05.565 **********
2026-03-23 00:39:01.589663 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:39:01.589675 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:39:01.589687 | orchestrator | ok: [testbed-manager]
2026-03-23 00:39:01.589698 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:39:01.589709 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:39:01.589721 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:39:01.589731 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:39:01.589743 | orchestrator |
2026-03-23 00:39:01.589754 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-03-23 00:39:01.589765 | orchestrator | Monday 23 March 2026 00:38:44 +0000 (0:00:01.317) 0:00:06.883 **********
2026-03-23 00:39:01.589777 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:39:01.589788 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:39:01.589826 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:39:01.589839 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:39:01.589850 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:39:01.589861 | orchestrator | changed: [testbed-manager]
2026-03-23 00:39:01.589873 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:39:01.589885 | orchestrator |
2026-03-23 00:39:01.589896 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-03-23 00:39:01.589908 | orchestrator | Monday 23 March 2026 00:38:45 +0000 (0:00:00.612) 0:00:07.495 **********
2026-03-23 00:39:01.589919 | orchestrator | changed: [testbed-manager]
2026-03-23 00:39:01.589931 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:39:01.589950 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:39:01.589962 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:39:01.589974 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:39:01.589986 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:39:01.589997 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:39:01.590006 | orchestrator |
2026-03-23 00:39:01.590079 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-03-23 00:39:01.590090 | orchestrator | Monday 23 March 2026 00:38:58 +0000 (0:00:13.509) 0:00:21.004 **********
2026-03-23 00:39:01.590101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:39:01.590111 | orchestrator |
2026-03-23 00:39:01.590121 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-03-23 00:39:01.590131 | orchestrator | Monday 23 March 2026 00:38:59 +0000 (0:00:01.046) 0:00:22.051 **********
2026-03-23 00:39:01.590141 | orchestrator | changed: [testbed-manager]
2026-03-23 00:39:01.590170 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:39:01.590180 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:39:01.590190 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:39:01.590200 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:39:01.590209 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:39:01.590219 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:39:01.590229 | orchestrator |
2026-03-23 00:39:01.590238 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:39:01.590249 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:39:01.590278 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:39:01.590288 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:39:01.590298 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:39:01.590308 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:39:01.590318 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:39:01.590328 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:39:01.590337 | orchestrator |
2026-03-23 00:39:01.590347 | orchestrator |
2026-03-23 00:39:01.590357 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:39:01.590366 | orchestrator | Monday 23 March 2026 00:39:01 +0000 (0:00:01.747) 0:00:23.799 **********
2026-03-23 00:39:01.590376 | orchestrator | ===============================================================================
2026-03-23 00:39:01.590386 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.51s
2026-03-23 00:39:01.590406 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.53s
2026-03-23 00:39:01.590429 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.75s
2026-03-23 00:39:01.590449 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.32s
2026-03-23 00:39:01.590459 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.16s
2026-03-23 00:39:01.590469 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.05s
2026-03-23 00:39:01.590505 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.96s
2026-03-23 00:39:01.590521 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.61s
2026-03-23 00:39:01.590531 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.59s
2026-03-23 00:39:01.705354 | orchestrator | ++ semver latest 7.1.1
2026-03-23 00:39:01.746398 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-23 00:39:01.746485 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-23 00:39:01.746501 | orchestrator | + sudo systemctl restart manager.service
2026-03-23 00:39:14.848667 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-23 00:39:14.848925 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-23 00:39:14.848959 | orchestrator | + local max_attempts=60
2026-03-23 00:39:14.848982 | orchestrator | + local name=ceph-ansible
2026-03-23 00:39:14.849002 | orchestrator | + local attempt_num=1
2026-03-23 00:39:14.849023 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:39:14.884845 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:39:14.884945 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:39:14.884961 | orchestrator | + sleep 5
2026-03-23 00:39:19.887159 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:39:19.919561 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:39:19.919655 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:39:19.919670 | orchestrator | + sleep 5
2026-03-23 00:39:24.922976 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:39:24.961222 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:39:24.961315 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:39:24.961330 | orchestrator | + sleep 5
2026-03-23 00:39:29.966383 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:39:30.002188 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:39:30.002284 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:39:30.002299 | orchestrator | + sleep 5
2026-03-23 00:39:35.006127 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:39:35.044609 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:39:35.044689 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:39:35.044700 | orchestrator | + sleep 5
2026-03-23 00:39:40.049568 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:39:40.080411 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:39:40.080506 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:39:40.080525 | orchestrator | + sleep 5
2026-03-23 00:39:45.085356 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:39:45.123209 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:39:45.123302 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:39:45.123317 | orchestrator | + sleep 5
2026-03-23 00:39:50.130438 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:39:50.163127 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-23 00:39:50.163250 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:39:50.163269 | orchestrator | + sleep 5
2026-03-23 00:39:55.166433 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:39:55.206943 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-23 00:39:55.207043 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:39:55.207061 | orchestrator | + sleep 5
2026-03-23 00:40:00.210422 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:40:00.253474 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-23 00:40:00.253566 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:40:00.253583 | orchestrator | + sleep 5
2026-03-23 00:40:05.258320 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:40:05.299257 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-23 00:40:05.299333 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:40:05.299343 | orchestrator | + sleep 5
2026-03-23 00:40:10.303400 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:40:10.337455 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-23 00:40:10.337550 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:40:10.337599 | orchestrator | + sleep 5
2026-03-23 00:40:15.342813 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:40:15.383861 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-23 00:40:15.383990 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-23 00:40:15.384019 | orchestrator | + sleep 5
2026-03-23 00:40:20.387979 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-23 00:40:20.421399 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:40:20.421506 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-23 00:40:20.421522 | orchestrator | + local max_attempts=60
2026-03-23 00:40:20.421535 | orchestrator | + local name=kolla-ansible
2026-03-23 00:40:20.421547 | orchestrator | + local attempt_num=1
2026-03-23 00:40:20.421923 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-23 00:40:20.447808 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:40:20.447897 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-23 00:40:20.447912 | orchestrator | + local max_attempts=60
2026-03-23 00:40:20.447925 | orchestrator | + local name=osism-ansible
2026-03-23 00:40:20.447937 | orchestrator | + local attempt_num=1
2026-03-23 00:40:20.448523 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-23 00:40:20.481249 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-23 00:40:20.481350 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-23 00:40:20.481366 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-23 00:40:20.614396 | orchestrator | ARA in ceph-ansible already disabled.
2026-03-23 00:40:20.766006 | orchestrator | ARA in kolla-ansible already disabled.
2026-03-23 00:40:20.889165 | orchestrator | ARA in osism-ansible already disabled.
2026-03-23 00:40:21.011914 | orchestrator | ARA in osism-kubernetes already disabled.
2026-03-23 00:40:21.013235 | orchestrator | + osism apply gather-facts
2026-03-23 00:40:32.103384 | orchestrator | 2026-03-23 00:40:32 | INFO  | Prepare task for execution of gather-facts.
2026-03-23 00:40:32.174589 | orchestrator | 2026-03-23 00:40:32 | INFO  | Task bfd04cc0-2341-4814-9845-0a2f14d41479 (gather-facts) was prepared for execution.
2026-03-23 00:40:32.174714 | orchestrator | 2026-03-23 00:40:32 | INFO  | It takes a moment until task bfd04cc0-2341-4814-9845-0a2f14d41479 (gather-facts) has been started and output is visible here.
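The `set -x` trace above repeats the same four steps per attempt (inspect the health status, compare against `healthy`, bump the attempt counter, sleep 5 seconds), so the helper can be reconstructed roughly as follows. This is a sketch inferred from the trace, not the testbed's actual source: the trace calls `/usr/bin/docker` directly, while plain `docker` is used here so the function can be exercised with a stub, and the error message is an assumption.

```shell
# Sketch of wait_for_container_healthy as suggested by the trace: poll the
# container's Docker health status until it is "healthy", giving up after
# max_attempts polls of 5 seconds each.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Post-increment: the comparison uses the current attempt number,
        # matching the (( attempt_num++ == max_attempts )) step in the trace.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above the ceph-ansible container needed about 13 polls (roughly a minute, passing through `unhealthy` and then `starting`) before reporting `healthy`, while kolla-ansible and osism-ansible were healthy on the first poll.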
2026-03-23 00:40:44.490410 | orchestrator |
2026-03-23 00:40:44.490552 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-23 00:40:44.490578 | orchestrator |
2026-03-23 00:40:44.490597 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-23 00:40:44.490616 | orchestrator | Monday 23 March 2026 00:40:35 +0000 (0:00:00.281) 0:00:00.281 **********
2026-03-23 00:40:44.490635 | orchestrator | ok: [testbed-manager]
2026-03-23 00:40:44.490656 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:40:44.490675 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:40:44.490693 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:40:44.490711 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:40:44.490760 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:40:44.490778 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:40:44.490798 | orchestrator |
2026-03-23 00:40:44.490816 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-23 00:40:44.490835 | orchestrator |
2026-03-23 00:40:44.490854 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-23 00:40:44.490872 | orchestrator | Monday 23 March 2026 00:40:43 +0000 (0:00:08.542) 0:00:08.824 **********
2026-03-23 00:40:44.490891 | orchestrator | skipping: [testbed-manager]
2026-03-23 00:40:44.490910 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:40:44.490928 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:40:44.490948 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:40:44.490967 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:40:44.490985 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:40:44.491004 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:40:44.491023 | orchestrator |
2026-03-23 00:40:44.491071 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:40:44.491092 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:40:44.491146 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:40:44.491165 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:40:44.491182 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:40:44.491201 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:40:44.491218 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:40:44.491236 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-23 00:40:44.491254 | orchestrator |
2026-03-23 00:40:44.491272 | orchestrator |
2026-03-23 00:40:44.491290 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:40:44.491307 | orchestrator | Monday 23 March 2026 00:40:44 +0000 (0:00:00.560) 0:00:09.384 **********
2026-03-23 00:40:44.491324 | orchestrator | ===============================================================================
2026-03-23 00:40:44.491340 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.54s
2026-03-23 00:40:44.491357 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2026-03-23 00:40:44.604908 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-03-23 00:40:44.614713 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-03-23 00:40:44.628474 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-03-23 00:40:44.636657 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-03-23 00:40:44.651639 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-03-23 00:40:44.659512 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-03-23 00:40:44.671007 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-03-23 00:40:44.680833 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-03-23 00:40:44.691307 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-03-23 00:40:44.706417 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-03-23 00:40:44.717042 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-03-23 00:40:44.725768 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-03-23 00:40:44.735057 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-03-23 00:40:44.744004 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-03-23 00:40:44.759636 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-03-23 00:40:44.768981 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-03-23 00:40:44.782482 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-03-23 00:40:44.792821 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-03-23 00:40:44.804808 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-03-23 00:40:44.817608 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-03-23 00:40:44.834819 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-03-23 00:40:44.847134 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-03-23 00:40:44.865216 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-03-23 00:40:44.879041 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-23 00:40:45.374727 | orchestrator | ok: Runtime: 0:23:14.907327
2026-03-23 00:40:45.503820 |
2026-03-23 00:40:45.504046 | TASK [Deploy services]
2026-03-23 00:40:46.040787 | orchestrator | skipping: Conditional result was False
2026-03-23 00:40:46.061595 |
2026-03-23 00:40:46.061773 | TASK [Deploy in a nutshell]
2026-03-23 00:40:46.780573 | orchestrator | + set -e
2026-03-23 00:40:46.780806 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-23 00:40:46.780880 | orchestrator | ++ export INTERACTIVE=false
2026-03-23 00:40:46.780908 | orchestrator | ++ INTERACTIVE=false
2026-03-23 00:40:46.780922 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-23 00:40:46.780935 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-23 00:40:46.780964 | orchestrator | + source /opt/manager-vars.sh
2026-03-23 00:40:46.781009 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-23 00:40:46.781039 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-23 00:40:46.781053 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-23 00:40:46.781069 | orchestrator | ++ CEPH_VERSION=reef
2026-03-23 00:40:46.781082 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-23 00:40:46.781100 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-23 00:40:46.781112 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-23 00:40:46.781132 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-23 00:40:46.781143 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-23 00:40:46.781163 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-23 00:40:46.781174 | orchestrator | ++ export ARA=false
2026-03-23 00:40:46.781186 | orchestrator | ++ ARA=false
2026-03-23 00:40:46.781197 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-23 00:40:46.781209 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-23 00:40:46.781220 | orchestrator | ++ export TEMPEST=true
2026-03-23 00:40:46.781236 | orchestrator | ++ TEMPEST=true
2026-03-23 00:40:46.781247 | orchestrator | ++ export IS_ZUUL=true
2026-03-23 00:40:46.781258 | orchestrator | ++ IS_ZUUL=true
2026-03-23 00:40:46.781269 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169
2026-03-23 00:40:46.781281 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169
2026-03-23 00:40:46.781292 | orchestrator | ++ export EXTERNAL_API=false
2026-03-23 00:40:46.781303 | orchestrator | ++ EXTERNAL_API=false
2026-03-23 00:40:46.781322 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-23 00:40:46.781340 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-23 00:40:46.781359 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-23 00:40:46.781387 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-23 00:40:46.781414 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-23 00:40:46.781434
| orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-23 00:40:46.781525 | orchestrator | + echo 2026-03-23 00:40:46.781546 | orchestrator | 2026-03-23 00:40:46.781558 | orchestrator | # PULL IMAGES 2026-03-23 00:40:46.781569 | orchestrator | 2026-03-23 00:40:46.781580 | orchestrator | + echo '# PULL IMAGES' 2026-03-23 00:40:46.781591 | orchestrator | + echo 2026-03-23 00:40:46.782866 | orchestrator | ++ semver latest 7.0.0 2026-03-23 00:40:46.835431 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-23 00:40:46.835531 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-23 00:40:46.835566 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-23 00:40:47.931863 | orchestrator | 2026-03-23 00:40:47 | INFO  | Trying to run play pull-images in environment custom 2026-03-23 00:40:58.011485 | orchestrator | 2026-03-23 00:40:58 | INFO  | Prepare task for execution of pull-images. 2026-03-23 00:40:58.082148 | orchestrator | 2026-03-23 00:40:58 | INFO  | Task 2a5afe04-66f9-4514-8a47-5ebb74667386 (pull-images) was prepared for execution. 2026-03-23 00:40:58.082261 | orchestrator | 2026-03-23 00:40:58 | INFO  | Task 2a5afe04-66f9-4514-8a47-5ebb74667386 is running in background. No more output. Check ARA for logs. 2026-03-23 00:40:59.396561 | orchestrator | 2026-03-23 00:40:59 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-23 00:41:09.449165 | orchestrator | 2026-03-23 00:41:09 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-23 00:41:09.527623 | orchestrator | 2026-03-23 00:41:09 | INFO  | Task ba0c5aef-ecd8-40aa-a4bb-b45bf5ebbf69 (wipe-partitions) was prepared for execution. 2026-03-23 00:41:09.527751 | orchestrator | 2026-03-23 00:41:09 | INFO  | It takes a moment until task ba0c5aef-ecd8-40aa-a4bb-b45bf5ebbf69 (wipe-partitions) has been started and output is visible here. 
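The trace above gates the image pull on a version check: `semver latest 7.0.0` returns -1, so the script falls through to the `[[ latest == \l\a\t\e\s\t ]]` branch and runs `osism apply` anyway. A minimal sketch of an equivalent gate using `sort -V`; the `ver_ge` helper is made up for illustration (the real script uses a `semver` helper whose exact semantics are not shown here):

```shell
#!/usr/bin/env bash
# Sketch (assumption): a sort -V stand-in for the semver comparison in the
# trace above. ver_ge is a hypothetical helper, not part of the testbed.
ver_ge() {
    # true when $1 >= $2 under GNU version ordering
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

MANAGER_VERSION=latest
# "latest" is not a release tag, so it is special-cased, mirroring the
# [[ latest == latest ]] branch in the trace
if [ "$MANAGER_VERSION" = latest ] || ver_ge "$MANAGER_VERSION" 7.0.0; then
    echo "version gate passed: would run 'osism apply ... pull-images'"
fi
```

The special case matters because `sort -V` gives no meaningful ordering between a word like `latest` and a numeric tag, which is presumably why the original script checks for the literal string first.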
2026-03-23 00:41:21.349052 | orchestrator | 2026-03-23 00:41:21.349165 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-23 00:41:21.349182 | orchestrator | 2026-03-23 00:41:21.349194 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-23 00:41:21.349214 | orchestrator | Monday 23 March 2026 00:41:12 +0000 (0:00:00.150) 0:00:00.150 ********** 2026-03-23 00:41:21.349264 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:41:21.349286 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:41:21.349305 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:41:21.349323 | orchestrator | 2026-03-23 00:41:21.349340 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-23 00:41:21.349357 | orchestrator | Monday 23 March 2026 00:41:13 +0000 (0:00:00.935) 0:00:01.085 ********** 2026-03-23 00:41:21.349380 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:21.349398 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:41:21.349418 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:41:21.349438 | orchestrator | 2026-03-23 00:41:21.349456 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-23 00:41:21.349474 | orchestrator | Monday 23 March 2026 00:41:13 +0000 (0:00:00.236) 0:00:01.322 ********** 2026-03-23 00:41:21.349492 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:41:21.349511 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:41:21.349524 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:41:21.349535 | orchestrator | 2026-03-23 00:41:21.349546 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-23 00:41:21.349557 | orchestrator | Monday 23 March 2026 00:41:14 +0000 (0:00:00.559) 0:00:01.881 ********** 2026-03-23 00:41:21.349570 | orchestrator | skipping: 
[testbed-node-3] 2026-03-23 00:41:21.349582 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:41:21.349594 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:41:21.349606 | orchestrator | 2026-03-23 00:41:21.349618 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-23 00:41:21.349630 | orchestrator | Monday 23 March 2026 00:41:14 +0000 (0:00:00.252) 0:00:02.134 ********** 2026-03-23 00:41:21.349642 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-23 00:41:21.349660 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-23 00:41:21.349675 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-23 00:41:21.349687 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-23 00:41:21.349699 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-23 00:41:21.349747 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-23 00:41:21.349760 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-23 00:41:21.349772 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-23 00:41:21.349784 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-23 00:41:21.349797 | orchestrator | 2026-03-23 00:41:21.349810 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-23 00:41:21.349822 | orchestrator | Monday 23 March 2026 00:41:16 +0000 (0:00:01.379) 0:00:03.513 ********** 2026-03-23 00:41:21.349835 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-23 00:41:21.349847 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-23 00:41:21.349859 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-23 00:41:21.349871 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-23 00:41:21.349883 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-23 00:41:21.349895 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-23 00:41:21.349906 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-23 00:41:21.349919 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-23 00:41:21.349932 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-23 00:41:21.349945 | orchestrator | 2026-03-23 00:41:21.349963 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-23 00:41:21.349975 | orchestrator | Monday 23 March 2026 00:41:17 +0000 (0:00:01.422) 0:00:04.935 ********** 2026-03-23 00:41:21.349985 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-23 00:41:21.349996 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-23 00:41:21.350007 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-23 00:41:21.350074 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-23 00:41:21.350097 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-23 00:41:21.350108 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-23 00:41:21.350119 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-23 00:41:21.350130 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-23 00:41:21.350141 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-23 00:41:21.350151 | orchestrator | 2026-03-23 00:41:21.350163 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-23 00:41:21.350174 | orchestrator | Monday 23 March 2026 00:41:19 +0000 (0:00:02.173) 0:00:07.109 ********** 2026-03-23 00:41:21.350185 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:41:21.350196 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:41:21.350206 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:41:21.350217 | orchestrator | 2026-03-23 00:41:21.350228 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-23 00:41:21.350239 | orchestrator | Monday 23 March 2026 00:41:20 +0000 (0:00:00.574) 0:00:07.683 ********** 2026-03-23 00:41:21.350250 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:41:21.350260 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:41:21.350271 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:41:21.350282 | orchestrator | 2026-03-23 00:41:21.350293 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:41:21.350306 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:21.350319 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:21.350351 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:21.350363 | orchestrator | 2026-03-23 00:41:21.350374 | orchestrator | 2026-03-23 00:41:21.350384 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:41:21.350395 | orchestrator | Monday 23 March 2026 00:41:21 +0000 (0:00:00.793) 0:00:08.478 ********** 2026-03-23 00:41:21.350406 | orchestrator | =============================================================================== 2026-03-23 00:41:21.350417 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.17s 2026-03-23 00:41:21.350428 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.42s 2026-03-23 00:41:21.350439 | orchestrator | Check device availability ----------------------------------------------- 1.38s 2026-03-23 00:41:21.350450 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.94s 2026-03-23 00:41:21.350461 | orchestrator | Request device events from the kernel 
----------------------------------- 0.80s 2026-03-23 00:41:21.350472 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s 2026-03-23 00:41:21.350482 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.56s 2026-03-23 00:41:21.350493 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2026-03-23 00:41:21.350504 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2026-03-23 00:41:32.836579 | orchestrator | 2026-03-23 00:41:32 | INFO  | Prepare task for execution of facts. 2026-03-23 00:41:32.911464 | orchestrator | 2026-03-23 00:41:32 | INFO  | Task f3926419-55a7-467d-8c49-3779a1141928 (facts) was prepared for execution. 2026-03-23 00:41:32.911611 | orchestrator | 2026-03-23 00:41:32 | INFO  | It takes a moment until task f3926419-55a7-467d-8c49-3779a1141928 (facts) has been started and output is visible here. 2026-03-23 00:41:44.538849 | orchestrator | 2026-03-23 00:41:44.538976 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-23 00:41:44.538998 | orchestrator | 2026-03-23 00:41:44.539053 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-23 00:41:44.539069 | orchestrator | Monday 23 March 2026 00:41:36 +0000 (0:00:00.342) 0:00:00.342 ********** 2026-03-23 00:41:44.539092 | orchestrator | ok: [testbed-manager] 2026-03-23 00:41:44.539109 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:41:44.539126 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:41:44.539141 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:41:44.539156 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:41:44.539176 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:41:44.539192 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:41:44.539207 | orchestrator | 2026-03-23 00:41:44.539234 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-03-23 00:41:44.539250 | orchestrator | Monday 23 March 2026 00:41:37 +0000 (0:00:01.346) 0:00:01.689 ********** 2026-03-23 00:41:44.539266 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:41:44.539283 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:41:44.539301 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:41:44.539316 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:41:44.539333 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:44.539349 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:41:44.539365 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:41:44.539397 | orchestrator | 2026-03-23 00:41:44.539415 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-23 00:41:44.539478 | orchestrator | 2026-03-23 00:41:44.539502 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-23 00:41:44.539526 | orchestrator | Monday 23 March 2026 00:41:38 +0000 (0:00:01.196) 0:00:02.886 ********** 2026-03-23 00:41:44.539550 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:41:44.539582 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:41:44.539602 | orchestrator | ok: [testbed-manager] 2026-03-23 00:41:44.539620 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:41:44.539637 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:41:44.539658 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:41:44.539684 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:41:44.539735 | orchestrator | 2026-03-23 00:41:44.539753 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-23 00:41:44.539774 | orchestrator | 2026-03-23 00:41:44.539792 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-23 00:41:44.539812 | orchestrator | Monday 23 March 
2026 00:41:43 +0000 (0:00:05.062) 0:00:07.948 ********** 2026-03-23 00:41:44.539831 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:41:44.539847 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:41:44.539863 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:41:44.539878 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:41:44.539894 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:44.539909 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:41:44.539923 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:41:44.539939 | orchestrator | 2026-03-23 00:41:44.539957 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:41:44.539976 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:44.539995 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:44.540013 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:44.540033 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:44.540052 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:44.540088 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:44.540107 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:41:44.540126 | orchestrator | 2026-03-23 00:41:44.540145 | orchestrator | 2026-03-23 00:41:44.540163 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:41:44.540181 | orchestrator | Monday 23 March 2026 00:41:44 +0000 (0:00:00.446) 0:00:08.395 ********** 2026-03-23 00:41:44.540194 
| orchestrator | =============================================================================== 2026-03-23 00:41:44.540204 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.06s 2026-03-23 00:41:44.540216 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.35s 2026-03-23 00:41:44.540226 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2026-03-23 00:41:44.540237 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-03-23 00:41:45.838743 | orchestrator | 2026-03-23 00:41:45 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-23 00:41:45.894767 | orchestrator | 2026-03-23 00:41:45 | INFO  | Task 8e6e9379-da82-491d-87b6-958ae45d97cd (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-23 00:41:45.894856 | orchestrator | 2026-03-23 00:41:45 | INFO  | It takes a moment until task 8e6e9379-da82-491d-87b6-958ae45d97cd (ceph-configure-lvm-volumes) has been started and output is visible here. 
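The wipe-partitions play above runs, per OSD disk: `wipefs` to drop filesystem signatures, a 32M zero-fill of the device head, then a udev reload and trigger so the kernel re-reads the now-empty devices. A rough shell equivalent of that per-device sequence, demonstrated on a scratch file so it cannot touch real disks (the file path and 40M size are illustrative only):

```shell
#!/usr/bin/env bash
# Sketch (assumption): the per-device wipe sequence from the play above,
# run against a temporary file standing in for /dev/sdX.
set -e
dev=$(mktemp)                                              # stand-in for /dev/sdb
dd if=/dev/urandom of="$dev" bs=1M count=40 status=none    # fake pre-existing data
wipefs -a "$dev" 2>/dev/null || true                       # drop signatures, as in the play
dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none  # zero first 32M
# On a real host the play then refreshes the kernel's view of the device:
#   udevadm control --reload && udevadm trigger
```

`conv=notrunc` keeps the device length intact while overwriting only the first 32 MiB, matching the "Overwrite first 32M with zeros" task rather than truncating the target.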
2026-03-23 00:41:57.183540 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-23 00:41:57.183629 | orchestrator | 2.16.14 2026-03-23 00:41:57.183640 | orchestrator | 2026-03-23 00:41:57.183648 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-23 00:41:57.183657 | orchestrator | 2026-03-23 00:41:57.183664 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-23 00:41:57.183672 | orchestrator | Monday 23 March 2026 00:41:50 +0000 (0:00:00.325) 0:00:00.325 ********** 2026-03-23 00:41:57.183679 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-23 00:41:57.183705 | orchestrator | 2026-03-23 00:41:57.183713 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-23 00:41:57.183720 | orchestrator | Monday 23 March 2026 00:41:50 +0000 (0:00:00.225) 0:00:00.550 ********** 2026-03-23 00:41:57.183728 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:41:57.183736 | orchestrator | 2026-03-23 00:41:57.183743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.183750 | orchestrator | Monday 23 March 2026 00:41:50 +0000 (0:00:00.211) 0:00:00.762 ********** 2026-03-23 00:41:57.183766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-23 00:41:57.183774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-23 00:41:57.183781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-23 00:41:57.183788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-23 00:41:57.183796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-23 
00:41:57.183803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-23 00:41:57.183810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-23 00:41:57.183817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-23 00:41:57.183824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-23 00:41:57.183831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-23 00:41:57.183857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-23 00:41:57.183865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-23 00:41:57.183872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-23 00:41:57.183879 | orchestrator | 2026-03-23 00:41:57.183886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.183893 | orchestrator | Monday 23 March 2026 00:41:50 +0000 (0:00:00.351) 0:00:01.114 ********** 2026-03-23 00:41:57.183900 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.183907 | orchestrator | 2026-03-23 00:41:57.183914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.183921 | orchestrator | Monday 23 March 2026 00:41:51 +0000 (0:00:00.427) 0:00:01.541 ********** 2026-03-23 00:41:57.183928 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.183935 | orchestrator | 2026-03-23 00:41:57.183942 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.183953 | orchestrator | Monday 23 March 2026 00:41:51 +0000 (0:00:00.193) 0:00:01.734 ********** 2026-03-23 
00:41:57.183961 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.183968 | orchestrator | 2026-03-23 00:41:57.183975 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.183982 | orchestrator | Monday 23 March 2026 00:41:51 +0000 (0:00:00.202) 0:00:01.937 ********** 2026-03-23 00:41:57.183990 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.183997 | orchestrator | 2026-03-23 00:41:57.184004 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.184011 | orchestrator | Monday 23 March 2026 00:41:51 +0000 (0:00:00.187) 0:00:02.124 ********** 2026-03-23 00:41:57.184018 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184025 | orchestrator | 2026-03-23 00:41:57.184032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.184040 | orchestrator | Monday 23 March 2026 00:41:52 +0000 (0:00:00.193) 0:00:02.317 ********** 2026-03-23 00:41:57.184047 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184054 | orchestrator | 2026-03-23 00:41:57.184062 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.184070 | orchestrator | Monday 23 March 2026 00:41:52 +0000 (0:00:00.184) 0:00:02.502 ********** 2026-03-23 00:41:57.184078 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184086 | orchestrator | 2026-03-23 00:41:57.184094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.184102 | orchestrator | Monday 23 March 2026 00:41:52 +0000 (0:00:00.191) 0:00:02.693 ********** 2026-03-23 00:41:57.184110 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184119 | orchestrator | 2026-03-23 00:41:57.184127 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-23 00:41:57.184135 | orchestrator | Monday 23 March 2026 00:41:52 +0000 (0:00:00.197) 0:00:02.891 ********** 2026-03-23 00:41:57.184144 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15) 2026-03-23 00:41:57.184152 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15) 2026-03-23 00:41:57.184159 | orchestrator | 2026-03-23 00:41:57.184167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.184187 | orchestrator | Monday 23 March 2026 00:41:53 +0000 (0:00:00.395) 0:00:03.286 ********** 2026-03-23 00:41:57.184194 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f) 2026-03-23 00:41:57.184202 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f) 2026-03-23 00:41:57.184209 | orchestrator | 2026-03-23 00:41:57.184220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.184234 | orchestrator | Monday 23 March 2026 00:41:53 +0000 (0:00:00.402) 0:00:03.688 ********** 2026-03-23 00:41:57.184241 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0) 2026-03-23 00:41:57.184248 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0) 2026-03-23 00:41:57.184255 | orchestrator | 2026-03-23 00:41:57.184262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.184270 | orchestrator | Monday 23 March 2026 00:41:54 +0000 (0:00:00.589) 0:00:04.278 ********** 2026-03-23 00:41:57.184277 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4) 2026-03-23 00:41:57.184284 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4) 2026-03-23 00:41:57.184291 | orchestrator | 2026-03-23 00:41:57.184298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:41:57.184305 | orchestrator | Monday 23 March 2026 00:41:54 +0000 (0:00:00.584) 0:00:04.863 ********** 2026-03-23 00:41:57.184313 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-23 00:41:57.184320 | orchestrator | 2026-03-23 00:41:57.184327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:41:57.184334 | orchestrator | Monday 23 March 2026 00:41:55 +0000 (0:00:00.718) 0:00:05.582 ********** 2026-03-23 00:41:57.184341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-23 00:41:57.184348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-23 00:41:57.184356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-23 00:41:57.184363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-23 00:41:57.184370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-23 00:41:57.184377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-23 00:41:57.184384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-23 00:41:57.184391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-23 00:41:57.184399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-23 00:41:57.184406 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-23 00:41:57.184413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-23 00:41:57.184420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-23 00:41:57.184427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-23 00:41:57.184434 | orchestrator | 2026-03-23 00:41:57.184442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:41:57.184449 | orchestrator | Monday 23 March 2026 00:41:55 +0000 (0:00:00.374) 0:00:05.956 ********** 2026-03-23 00:41:57.184456 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184463 | orchestrator | 2026-03-23 00:41:57.184470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:41:57.184477 | orchestrator | Monday 23 March 2026 00:41:55 +0000 (0:00:00.205) 0:00:06.162 ********** 2026-03-23 00:41:57.184484 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184491 | orchestrator | 2026-03-23 00:41:57.184498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:41:57.184506 | orchestrator | Monday 23 March 2026 00:41:56 +0000 (0:00:00.193) 0:00:06.356 ********** 2026-03-23 00:41:57.184513 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184525 | orchestrator | 2026-03-23 00:41:57.184532 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:41:57.184539 | orchestrator | Monday 23 March 2026 00:41:56 +0000 (0:00:00.198) 0:00:06.554 ********** 2026-03-23 00:41:57.184546 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184553 | orchestrator | 2026-03-23 00:41:57.184560 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-23 00:41:57.184568 | orchestrator | Monday 23 March 2026 00:41:56 +0000 (0:00:00.206) 0:00:06.761 ********** 2026-03-23 00:41:57.184575 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184582 | orchestrator | 2026-03-23 00:41:57.184589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:41:57.184596 | orchestrator | Monday 23 March 2026 00:41:56 +0000 (0:00:00.213) 0:00:06.974 ********** 2026-03-23 00:41:57.184603 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184610 | orchestrator | 2026-03-23 00:41:57.184617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:41:57.184624 | orchestrator | Monday 23 March 2026 00:41:56 +0000 (0:00:00.187) 0:00:07.162 ********** 2026-03-23 00:41:57.184632 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:41:57.184639 | orchestrator | 2026-03-23 00:41:57.184650 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:03.929897 | orchestrator | Monday 23 March 2026 00:41:57 +0000 (0:00:00.185) 0:00:07.348 ********** 2026-03-23 00:42:03.930125 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.930158 | orchestrator | 2026-03-23 00:42:03.930178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:03.930196 | orchestrator | Monday 23 March 2026 00:41:57 +0000 (0:00:00.183) 0:00:07.531 ********** 2026-03-23 00:42:03.930208 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-23 00:42:03.930220 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-23 00:42:03.930231 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-23 00:42:03.930242 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-23 00:42:03.930253 | orchestrator | 2026-03-23 
00:42:03.930265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:03.930294 | orchestrator | Monday 23 March 2026 00:41:58 +0000 (0:00:00.932) 0:00:08.464 ********** 2026-03-23 00:42:03.930306 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.930317 | orchestrator | 2026-03-23 00:42:03.930328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:03.930339 | orchestrator | Monday 23 March 2026 00:41:58 +0000 (0:00:00.209) 0:00:08.673 ********** 2026-03-23 00:42:03.930350 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.930361 | orchestrator | 2026-03-23 00:42:03.930372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:03.930383 | orchestrator | Monday 23 March 2026 00:41:58 +0000 (0:00:00.175) 0:00:08.849 ********** 2026-03-23 00:42:03.930394 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.930404 | orchestrator | 2026-03-23 00:42:03.930417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:03.930430 | orchestrator | Monday 23 March 2026 00:41:58 +0000 (0:00:00.195) 0:00:09.044 ********** 2026-03-23 00:42:03.930442 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.930455 | orchestrator | 2026-03-23 00:42:03.930467 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-23 00:42:03.930480 | orchestrator | Monday 23 March 2026 00:41:59 +0000 (0:00:00.179) 0:00:09.224 ********** 2026-03-23 00:42:03.930492 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-23 00:42:03.930504 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-23 00:42:03.930516 | orchestrator | 2026-03-23 00:42:03.930529 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-23 00:42:03.930541 | orchestrator | Monday 23 March 2026 00:41:59 +0000 (0:00:00.145) 0:00:09.369 ********** 2026-03-23 00:42:03.930582 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.930595 | orchestrator | 2026-03-23 00:42:03.930608 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-23 00:42:03.930620 | orchestrator | Monday 23 March 2026 00:41:59 +0000 (0:00:00.128) 0:00:09.498 ********** 2026-03-23 00:42:03.930633 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.930645 | orchestrator | 2026-03-23 00:42:03.930658 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-23 00:42:03.930670 | orchestrator | Monday 23 March 2026 00:41:59 +0000 (0:00:00.126) 0:00:09.625 ********** 2026-03-23 00:42:03.930716 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.930729 | orchestrator | 2026-03-23 00:42:03.930742 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-23 00:42:03.930755 | orchestrator | Monday 23 March 2026 00:41:59 +0000 (0:00:00.118) 0:00:09.743 ********** 2026-03-23 00:42:03.930768 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:42:03.930780 | orchestrator | 2026-03-23 00:42:03.930790 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-23 00:42:03.930801 | orchestrator | Monday 23 March 2026 00:41:59 +0000 (0:00:00.110) 0:00:09.854 ********** 2026-03-23 00:42:03.930813 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e8fe5fb-1ce5-58e9-8668-0121db885e3a'}}) 2026-03-23 00:42:03.930825 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '64892dc7-40b9-50f4-a971-7ffdf1a56e40'}}) 2026-03-23 00:42:03.930836 | orchestrator | 2026-03-23 00:42:03.930846 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-23 00:42:03.930857 | orchestrator | Monday 23 March 2026 00:41:59 +0000 (0:00:00.138) 0:00:09.992 ********** 2026-03-23 00:42:03.930869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e8fe5fb-1ce5-58e9-8668-0121db885e3a'}})  2026-03-23 00:42:03.930887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '64892dc7-40b9-50f4-a971-7ffdf1a56e40'}})  2026-03-23 00:42:03.930908 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.930927 | orchestrator | 2026-03-23 00:42:03.930946 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-23 00:42:03.930964 | orchestrator | Monday 23 March 2026 00:41:59 +0000 (0:00:00.143) 0:00:10.136 ********** 2026-03-23 00:42:03.930982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e8fe5fb-1ce5-58e9-8668-0121db885e3a'}})  2026-03-23 00:42:03.931000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '64892dc7-40b9-50f4-a971-7ffdf1a56e40'}})  2026-03-23 00:42:03.931020 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.931037 | orchestrator | 2026-03-23 00:42:03.931056 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-23 00:42:03.931076 | orchestrator | Monday 23 March 2026 00:42:00 +0000 (0:00:00.261) 0:00:10.397 ********** 2026-03-23 00:42:03.931095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e8fe5fb-1ce5-58e9-8668-0121db885e3a'}})  2026-03-23 00:42:03.931139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '64892dc7-40b9-50f4-a971-7ffdf1a56e40'}})  2026-03-23 00:42:03.931152 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.931163 | 
orchestrator | 2026-03-23 00:42:03.931174 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-23 00:42:03.931184 | orchestrator | Monday 23 March 2026 00:42:00 +0000 (0:00:00.130) 0:00:10.527 ********** 2026-03-23 00:42:03.931195 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:42:03.931206 | orchestrator | 2026-03-23 00:42:03.931217 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-23 00:42:03.931227 | orchestrator | Monday 23 March 2026 00:42:00 +0000 (0:00:00.115) 0:00:10.643 ********** 2026-03-23 00:42:03.931238 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:42:03.931260 | orchestrator | 2026-03-23 00:42:03.931271 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-23 00:42:03.931282 | orchestrator | Monday 23 March 2026 00:42:00 +0000 (0:00:00.134) 0:00:10.777 ********** 2026-03-23 00:42:03.931292 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.931304 | orchestrator | 2026-03-23 00:42:03.931315 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-23 00:42:03.931326 | orchestrator | Monday 23 March 2026 00:42:00 +0000 (0:00:00.129) 0:00:10.907 ********** 2026-03-23 00:42:03.931337 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.931348 | orchestrator | 2026-03-23 00:42:03.931358 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-23 00:42:03.931369 | orchestrator | Monday 23 March 2026 00:42:00 +0000 (0:00:00.124) 0:00:11.031 ********** 2026-03-23 00:42:03.931380 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:42:03.931390 | orchestrator | 2026-03-23 00:42:03.931401 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-23 00:42:03.931412 | orchestrator | Monday 23 March 2026 00:42:00 +0000 
(0:00:00.113) 0:00:11.145 **********
2026-03-23 00:42:03.931423 | orchestrator | ok: [testbed-node-3] => {
2026-03-23 00:42:03.931434 | orchestrator |     "ceph_osd_devices": {
2026-03-23 00:42:03.931445 | orchestrator |         "sdb": {
2026-03-23 00:42:03.931456 | orchestrator |             "osd_lvm_uuid": "4e8fe5fb-1ce5-58e9-8668-0121db885e3a"
2026-03-23 00:42:03.931466 | orchestrator |         },
2026-03-23 00:42:03.931477 | orchestrator |         "sdc": {
2026-03-23 00:42:03.931488 | orchestrator |             "osd_lvm_uuid": "64892dc7-40b9-50f4-a971-7ffdf1a56e40"
2026-03-23 00:42:03.931499 | orchestrator |         }
2026-03-23 00:42:03.931510 | orchestrator |     }
2026-03-23 00:42:03.931520 | orchestrator | }
2026-03-23 00:42:03.931531 | orchestrator |
2026-03-23 00:42:03.931542 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-23 00:42:03.931553 | orchestrator | Monday 23 March 2026 00:42:01 +0000 (0:00:00.120) 0:00:11.265 **********
2026-03-23 00:42:03.931564 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:42:03.931574 | orchestrator |
2026-03-23 00:42:03.931585 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-23 00:42:03.931596 | orchestrator | Monday 23 March 2026 00:42:01 +0000 (0:00:00.112) 0:00:11.378 **********
2026-03-23 00:42:03.931609 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:42:03.931628 | orchestrator |
2026-03-23 00:42:03.931647 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-23 00:42:03.931664 | orchestrator | Monday 23 March 2026 00:42:01 +0000 (0:00:00.127) 0:00:11.506 **********
2026-03-23 00:42:03.931705 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:42:03.931724 | orchestrator |
2026-03-23 00:42:03.931739 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-23 00:42:03.931757 | orchestrator | Monday 23 March 2026 00:42:01 +0000 (0:00:00.100) 0:00:11.606 **********
2026-03-23 00:42:03.931776 | orchestrator | changed: [testbed-node-3] => {
2026-03-23 00:42:03.931796 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-23 00:42:03.931815 | orchestrator |         "ceph_osd_devices": {
2026-03-23 00:42:03.931833 | orchestrator |             "sdb": {
2026-03-23 00:42:03.931848 | orchestrator |                 "osd_lvm_uuid": "4e8fe5fb-1ce5-58e9-8668-0121db885e3a"
2026-03-23 00:42:03.931860 | orchestrator |             },
2026-03-23 00:42:03.931870 | orchestrator |             "sdc": {
2026-03-23 00:42:03.931881 | orchestrator |                 "osd_lvm_uuid": "64892dc7-40b9-50f4-a971-7ffdf1a56e40"
2026-03-23 00:42:03.931891 | orchestrator |             }
2026-03-23 00:42:03.931902 | orchestrator |         },
2026-03-23 00:42:03.931913 | orchestrator |         "lvm_volumes": [
2026-03-23 00:42:03.931923 | orchestrator |             {
2026-03-23 00:42:03.931934 | orchestrator |                 "data": "osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a",
2026-03-23 00:42:03.931945 | orchestrator |                 "data_vg": "ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a"
2026-03-23 00:42:03.931966 | orchestrator |             },
2026-03-23 00:42:03.931976 | orchestrator |             {
2026-03-23 00:42:03.931987 | orchestrator |                 "data": "osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40",
2026-03-23 00:42:03.931998 | orchestrator |                 "data_vg": "ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40"
2026-03-23 00:42:03.932009 | orchestrator |             }
2026-03-23 00:42:03.932020 | orchestrator |         ]
2026-03-23 00:42:03.932030 | orchestrator |     }
2026-03-23 00:42:03.932041 | orchestrator | }
2026-03-23 00:42:03.932052 | orchestrator |
2026-03-23 00:42:03.932062 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-23 00:42:03.932073 | orchestrator | Monday 23 March 2026 00:42:01 +0000 (0:00:00.190) 0:00:11.797 **********
2026-03-23 00:42:03.932084 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-23 00:42:03.932094 | orchestrator |
2026-03-23 00:42:03.932105 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-03-23 00:42:03.932116 | orchestrator | 2026-03-23 00:42:03.932126 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-23 00:42:03.932137 | orchestrator | Monday 23 March 2026 00:42:03 +0000 (0:00:01.880) 0:00:13.678 ********** 2026-03-23 00:42:03.932147 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-23 00:42:03.932158 | orchestrator | 2026-03-23 00:42:03.932169 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-23 00:42:03.932180 | orchestrator | Monday 23 March 2026 00:42:03 +0000 (0:00:00.223) 0:00:13.901 ********** 2026-03-23 00:42:03.932190 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:42:03.932201 | orchestrator | 2026-03-23 00:42:03.932221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.831540 | orchestrator | Monday 23 March 2026 00:42:03 +0000 (0:00:00.192) 0:00:14.094 ********** 2026-03-23 00:42:10.831661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-23 00:42:10.831789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-23 00:42:10.831806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-23 00:42:10.831817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-23 00:42:10.831828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-23 00:42:10.831839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-23 00:42:10.831850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-23 00:42:10.831865 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-23 00:42:10.831876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-23 00:42:10.831888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-23 00:42:10.831898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-23 00:42:10.831909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-23 00:42:10.831941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-23 00:42:10.831952 | orchestrator | 2026-03-23 00:42:10.831964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.831975 | orchestrator | Monday 23 March 2026 00:42:04 +0000 (0:00:00.323) 0:00:14.417 ********** 2026-03-23 00:42:10.831986 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.831998 | orchestrator | 2026-03-23 00:42:10.832008 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832019 | orchestrator | Monday 23 March 2026 00:42:04 +0000 (0:00:00.201) 0:00:14.619 ********** 2026-03-23 00:42:10.832055 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832066 | orchestrator | 2026-03-23 00:42:10.832078 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832091 | orchestrator | Monday 23 March 2026 00:42:04 +0000 (0:00:00.178) 0:00:14.798 ********** 2026-03-23 00:42:10.832103 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832116 | orchestrator | 2026-03-23 00:42:10.832128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832140 | 
orchestrator | Monday 23 March 2026 00:42:04 +0000 (0:00:00.169) 0:00:14.967 ********** 2026-03-23 00:42:10.832152 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832164 | orchestrator | 2026-03-23 00:42:10.832181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832200 | orchestrator | Monday 23 March 2026 00:42:04 +0000 (0:00:00.162) 0:00:15.130 ********** 2026-03-23 00:42:10.832213 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832226 | orchestrator | 2026-03-23 00:42:10.832238 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832249 | orchestrator | Monday 23 March 2026 00:42:05 +0000 (0:00:00.471) 0:00:15.602 ********** 2026-03-23 00:42:10.832260 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832271 | orchestrator | 2026-03-23 00:42:10.832285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832301 | orchestrator | Monday 23 March 2026 00:42:05 +0000 (0:00:00.172) 0:00:15.774 ********** 2026-03-23 00:42:10.832312 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832322 | orchestrator | 2026-03-23 00:42:10.832333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832344 | orchestrator | Monday 23 March 2026 00:42:05 +0000 (0:00:00.185) 0:00:15.960 ********** 2026-03-23 00:42:10.832354 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832365 | orchestrator | 2026-03-23 00:42:10.832375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832386 | orchestrator | Monday 23 March 2026 00:42:05 +0000 (0:00:00.167) 0:00:16.127 ********** 2026-03-23 00:42:10.832397 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff) 2026-03-23 00:42:10.832408 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff) 2026-03-23 00:42:10.832419 | orchestrator | 2026-03-23 00:42:10.832430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832440 | orchestrator | Monday 23 March 2026 00:42:06 +0000 (0:00:00.359) 0:00:16.487 ********** 2026-03-23 00:42:10.832451 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e) 2026-03-23 00:42:10.832462 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e) 2026-03-23 00:42:10.832472 | orchestrator | 2026-03-23 00:42:10.832483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832494 | orchestrator | Monday 23 March 2026 00:42:06 +0000 (0:00:00.367) 0:00:16.854 ********** 2026-03-23 00:42:10.832504 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6) 2026-03-23 00:42:10.832515 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6) 2026-03-23 00:42:10.832526 | orchestrator | 2026-03-23 00:42:10.832537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:10.832567 | orchestrator | Monday 23 March 2026 00:42:07 +0000 (0:00:00.370) 0:00:17.224 ********** 2026-03-23 00:42:10.832579 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5) 2026-03-23 00:42:10.832589 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5) 2026-03-23 00:42:10.832600 | orchestrator | 2026-03-23 00:42:10.832618 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-23 00:42:10.832629 | orchestrator | Monday 23 March 2026 00:42:07 +0000 (0:00:00.371) 0:00:17.596 ********** 2026-03-23 00:42:10.832640 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-23 00:42:10.832650 | orchestrator | 2026-03-23 00:42:10.832661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.832671 | orchestrator | Monday 23 March 2026 00:42:07 +0000 (0:00:00.291) 0:00:17.888 ********** 2026-03-23 00:42:10.832708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-23 00:42:10.832720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-23 00:42:10.832738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-23 00:42:10.832749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-23 00:42:10.832760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-23 00:42:10.832771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-23 00:42:10.832781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-23 00:42:10.832792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-23 00:42:10.832802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-23 00:42:10.832813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-23 00:42:10.832824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-03-23 00:42:10.832834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-23 00:42:10.832845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-23 00:42:10.832856 | orchestrator | 2026-03-23 00:42:10.832866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.832877 | orchestrator | Monday 23 March 2026 00:42:08 +0000 (0:00:00.355) 0:00:18.244 ********** 2026-03-23 00:42:10.832888 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832898 | orchestrator | 2026-03-23 00:42:10.832909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.832919 | orchestrator | Monday 23 March 2026 00:42:08 +0000 (0:00:00.186) 0:00:18.430 ********** 2026-03-23 00:42:10.832930 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832940 | orchestrator | 2026-03-23 00:42:10.832951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.832962 | orchestrator | Monday 23 March 2026 00:42:08 +0000 (0:00:00.623) 0:00:19.053 ********** 2026-03-23 00:42:10.832972 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.832983 | orchestrator | 2026-03-23 00:42:10.832993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.833004 | orchestrator | Monday 23 March 2026 00:42:09 +0000 (0:00:00.211) 0:00:19.265 ********** 2026-03-23 00:42:10.833014 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.833025 | orchestrator | 2026-03-23 00:42:10.833035 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.833046 | orchestrator | Monday 23 March 2026 00:42:09 +0000 (0:00:00.189) 0:00:19.454 ********** 2026-03-23 00:42:10.833057 
| orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.833067 | orchestrator | 2026-03-23 00:42:10.833078 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.833088 | orchestrator | Monday 23 March 2026 00:42:09 +0000 (0:00:00.193) 0:00:19.648 ********** 2026-03-23 00:42:10.833099 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.833116 | orchestrator | 2026-03-23 00:42:10.833127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.833138 | orchestrator | Monday 23 March 2026 00:42:09 +0000 (0:00:00.179) 0:00:19.828 ********** 2026-03-23 00:42:10.833148 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.833159 | orchestrator | 2026-03-23 00:42:10.833169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.833180 | orchestrator | Monday 23 March 2026 00:42:09 +0000 (0:00:00.188) 0:00:20.016 ********** 2026-03-23 00:42:10.833190 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:10.833201 | orchestrator | 2026-03-23 00:42:10.833211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.833222 | orchestrator | Monday 23 March 2026 00:42:10 +0000 (0:00:00.188) 0:00:20.205 ********** 2026-03-23 00:42:10.833233 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-23 00:42:10.833244 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-23 00:42:10.833255 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-23 00:42:10.833265 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-23 00:42:10.833284 | orchestrator | 2026-03-23 00:42:10.833304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:10.833322 | orchestrator | Monday 23 March 2026 00:42:10 +0000 (0:00:00.699) 0:00:20.905 
********** 2026-03-23 00:42:10.833341 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:16.191194 | orchestrator | 2026-03-23 00:42:16.191283 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:16.191297 | orchestrator | Monday 23 March 2026 00:42:10 +0000 (0:00:00.165) 0:00:21.070 ********** 2026-03-23 00:42:16.191305 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:16.191315 | orchestrator | 2026-03-23 00:42:16.191323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:16.191331 | orchestrator | Monday 23 March 2026 00:42:11 +0000 (0:00:00.184) 0:00:21.255 ********** 2026-03-23 00:42:16.191339 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:16.191347 | orchestrator | 2026-03-23 00:42:16.191355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:16.191362 | orchestrator | Monday 23 March 2026 00:42:11 +0000 (0:00:00.171) 0:00:21.426 ********** 2026-03-23 00:42:16.191370 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:16.191378 | orchestrator | 2026-03-23 00:42:16.191386 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-23 00:42:16.191398 | orchestrator | Monday 23 March 2026 00:42:11 +0000 (0:00:00.151) 0:00:21.578 ********** 2026-03-23 00:42:16.191412 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-23 00:42:16.191424 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-23 00:42:16.191436 | orchestrator | 2026-03-23 00:42:16.191449 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-23 00:42:16.191484 | orchestrator | Monday 23 March 2026 00:42:11 +0000 (0:00:00.271) 0:00:21.850 ********** 2026-03-23 00:42:16.191499 | orchestrator | skipping: 
[testbed-node-4] 2026-03-23 00:42:16.191512 | orchestrator | 2026-03-23 00:42:16.191522 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-23 00:42:16.191530 | orchestrator | Monday 23 March 2026 00:42:11 +0000 (0:00:00.113) 0:00:21.963 ********** 2026-03-23 00:42:16.191538 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:16.191546 | orchestrator | 2026-03-23 00:42:16.191554 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-23 00:42:16.191566 | orchestrator | Monday 23 March 2026 00:42:11 +0000 (0:00:00.087) 0:00:22.051 ********** 2026-03-23 00:42:16.191574 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:16.191582 | orchestrator | 2026-03-23 00:42:16.191590 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-23 00:42:16.191597 | orchestrator | Monday 23 March 2026 00:42:11 +0000 (0:00:00.085) 0:00:22.137 ********** 2026-03-23 00:42:16.191629 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:42:16.191639 | orchestrator | 2026-03-23 00:42:16.191647 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-23 00:42:16.191655 | orchestrator | Monday 23 March 2026 00:42:12 +0000 (0:00:00.099) 0:00:22.236 ********** 2026-03-23 00:42:16.191663 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1bf36823-02d4-5086-a00f-5e3efdd328af'}}) 2026-03-23 00:42:16.191671 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '92a7bb1e-121d-56dc-8fa7-94c9c65422a6'}}) 2026-03-23 00:42:16.191718 | orchestrator | 2026-03-23 00:42:16.191728 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-23 00:42:16.191737 | orchestrator | Monday 23 March 2026 00:42:12 +0000 (0:00:00.124) 0:00:22.361 ********** 2026-03-23 00:42:16.191747 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1bf36823-02d4-5086-a00f-5e3efdd328af'}})  2026-03-23 00:42:16.191758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '92a7bb1e-121d-56dc-8fa7-94c9c65422a6'}})  2026-03-23 00:42:16.191767 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:16.191776 | orchestrator | 2026-03-23 00:42:16.191786 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-23 00:42:16.191795 | orchestrator | Monday 23 March 2026 00:42:12 +0000 (0:00:00.107) 0:00:22.468 ********** 2026-03-23 00:42:16.191804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1bf36823-02d4-5086-a00f-5e3efdd328af'}})  2026-03-23 00:42:16.191813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '92a7bb1e-121d-56dc-8fa7-94c9c65422a6'}})  2026-03-23 00:42:16.191823 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:16.191832 | orchestrator | 2026-03-23 00:42:16.191841 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-23 00:42:16.191850 | orchestrator | Monday 23 March 2026 00:42:12 +0000 (0:00:00.126) 0:00:22.595 ********** 2026-03-23 00:42:16.191859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1bf36823-02d4-5086-a00f-5e3efdd328af'}})  2026-03-23 00:42:16.191867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '92a7bb1e-121d-56dc-8fa7-94c9c65422a6'}})  2026-03-23 00:42:16.191876 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:42:16.191885 | orchestrator | 2026-03-23 00:42:16.191893 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-23 00:42:16.191902 | orchestrator | Monday 23 March 2026 00:42:12 +0000 
(0:00:00.112) 0:00:22.707 **********
2026-03-23 00:42:16.191911 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:42:16.191920 | orchestrator |
2026-03-23 00:42:16.191929 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-23 00:42:16.191938 | orchestrator | Monday 23 March 2026 00:42:12 +0000 (0:00:00.091) 0:00:22.799 **********
2026-03-23 00:42:16.191946 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:42:16.191955 | orchestrator |
2026-03-23 00:42:16.191963 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-23 00:42:16.191972 | orchestrator | Monday 23 March 2026 00:42:12 +0000 (0:00:00.126) 0:00:22.926 **********
2026-03-23 00:42:16.191996 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:42:16.192006 | orchestrator |
2026-03-23 00:42:16.192015 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-23 00:42:16.192023 | orchestrator | Monday 23 March 2026 00:42:12 +0000 (0:00:00.083) 0:00:23.009 **********
2026-03-23 00:42:16.192033 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:42:16.192041 | orchestrator |
2026-03-23 00:42:16.192050 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-23 00:42:16.192059 | orchestrator | Monday 23 March 2026 00:42:13 +0000 (0:00:00.242) 0:00:23.251 **********
2026-03-23 00:42:16.192068 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:42:16.192083 | orchestrator |
2026-03-23 00:42:16.192092 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-23 00:42:16.192100 | orchestrator | Monday 23 March 2026 00:42:13 +0000 (0:00:00.102) 0:00:23.354 **********
2026-03-23 00:42:16.192108 | orchestrator | ok: [testbed-node-4] => {
2026-03-23 00:42:16.192116 | orchestrator |     "ceph_osd_devices": {
2026-03-23 00:42:16.192124 | orchestrator |         "sdb": {
2026-03-23 00:42:16.192132 | orchestrator |             "osd_lvm_uuid": "1bf36823-02d4-5086-a00f-5e3efdd328af"
2026-03-23 00:42:16.192140 | orchestrator |         },
2026-03-23 00:42:16.192148 | orchestrator |         "sdc": {
2026-03-23 00:42:16.192155 | orchestrator |             "osd_lvm_uuid": "92a7bb1e-121d-56dc-8fa7-94c9c65422a6"
2026-03-23 00:42:16.192163 | orchestrator |         }
2026-03-23 00:42:16.192171 | orchestrator |     }
2026-03-23 00:42:16.192179 | orchestrator | }
2026-03-23 00:42:16.192187 | orchestrator |
2026-03-23 00:42:16.192195 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-23 00:42:16.192203 | orchestrator | Monday 23 March 2026 00:42:13 +0000 (0:00:00.109) 0:00:23.464 **********
2026-03-23 00:42:16.192211 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:42:16.192219 | orchestrator |
2026-03-23 00:42:16.192227 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-23 00:42:16.192234 | orchestrator | Monday 23 March 2026 00:42:13 +0000 (0:00:00.119) 0:00:23.583 **********
2026-03-23 00:42:16.192244 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:42:16.192258 | orchestrator |
2026-03-23 00:42:16.192266 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-23 00:42:16.192274 | orchestrator | Monday 23 March 2026 00:42:13 +0000 (0:00:00.127) 0:00:23.710 **********
2026-03-23 00:42:16.192282 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:42:16.192290 | orchestrator |
2026-03-23 00:42:16.192297 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-23 00:42:16.192311 | orchestrator | Monday 23 March 2026 00:42:13 +0000 (0:00:00.135) 0:00:23.846 **********
2026-03-23 00:42:16.192320 | orchestrator | changed: [testbed-node-4] => {
2026-03-23 00:42:16.192328 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-23 00:42:16.192336 | orchestrator
|  "ceph_osd_devices": { 2026-03-23 00:42:16.192344 | orchestrator |  "sdb": { 2026-03-23 00:42:16.192352 | orchestrator |  "osd_lvm_uuid": "1bf36823-02d4-5086-a00f-5e3efdd328af" 2026-03-23 00:42:16.192360 | orchestrator |  }, 2026-03-23 00:42:16.192368 | orchestrator |  "sdc": { 2026-03-23 00:42:16.192375 | orchestrator |  "osd_lvm_uuid": "92a7bb1e-121d-56dc-8fa7-94c9c65422a6" 2026-03-23 00:42:16.192383 | orchestrator |  } 2026-03-23 00:42:16.192391 | orchestrator |  }, 2026-03-23 00:42:16.192399 | orchestrator |  "lvm_volumes": [ 2026-03-23 00:42:16.192408 | orchestrator |  { 2026-03-23 00:42:16.192420 | orchestrator |  "data": "osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af", 2026-03-23 00:42:16.192434 | orchestrator |  "data_vg": "ceph-1bf36823-02d4-5086-a00f-5e3efdd328af" 2026-03-23 00:42:16.192447 | orchestrator |  }, 2026-03-23 00:42:16.192461 | orchestrator |  { 2026-03-23 00:42:16.192473 | orchestrator |  "data": "osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6", 2026-03-23 00:42:16.192486 | orchestrator |  "data_vg": "ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6" 2026-03-23 00:42:16.192498 | orchestrator |  } 2026-03-23 00:42:16.192511 | orchestrator |  ] 2026-03-23 00:42:16.192522 | orchestrator |  } 2026-03-23 00:42:16.192536 | orchestrator | } 2026-03-23 00:42:16.192549 | orchestrator | 2026-03-23 00:42:16.192561 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-23 00:42:16.192575 | orchestrator | Monday 23 March 2026 00:42:13 +0000 (0:00:00.188) 0:00:24.035 ********** 2026-03-23 00:42:16.192589 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-23 00:42:16.192602 | orchestrator | 2026-03-23 00:42:16.192630 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-23 00:42:16.192638 | orchestrator | 2026-03-23 00:42:16.192646 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2026-03-23 00:42:16.192654 | orchestrator | Monday 23 March 2026 00:42:14 +0000 (0:00:00.978) 0:00:25.014 ********** 2026-03-23 00:42:16.192662 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-23 00:42:16.192670 | orchestrator | 2026-03-23 00:42:16.192711 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-23 00:42:16.192721 | orchestrator | Monday 23 March 2026 00:42:15 +0000 (0:00:00.510) 0:00:25.524 ********** 2026-03-23 00:42:16.192730 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:42:16.192738 | orchestrator | 2026-03-23 00:42:16.192745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:16.192753 | orchestrator | Monday 23 March 2026 00:42:15 +0000 (0:00:00.549) 0:00:26.073 ********** 2026-03-23 00:42:16.192761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-23 00:42:16.192769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-23 00:42:16.192777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-23 00:42:16.192785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-23 00:42:16.192792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-23 00:42:16.192809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-23 00:42:23.620009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-23 00:42:23.620157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-23 00:42:23.620183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-23 
00:42:23.620202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-23 00:42:23.620220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-23 00:42:23.620238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-23 00:42:23.620258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-23 00:42:23.620279 | orchestrator | 2026-03-23 00:42:23.620328 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620346 | orchestrator | Monday 23 March 2026 00:42:16 +0000 (0:00:00.364) 0:00:26.437 ********** 2026-03-23 00:42:23.620357 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.620369 | orchestrator | 2026-03-23 00:42:23.620380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620391 | orchestrator | Monday 23 March 2026 00:42:16 +0000 (0:00:00.216) 0:00:26.654 ********** 2026-03-23 00:42:23.620402 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.620412 | orchestrator | 2026-03-23 00:42:23.620423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620434 | orchestrator | Monday 23 March 2026 00:42:16 +0000 (0:00:00.213) 0:00:26.868 ********** 2026-03-23 00:42:23.620445 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.620456 | orchestrator | 2026-03-23 00:42:23.620466 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620477 | orchestrator | Monday 23 March 2026 00:42:16 +0000 (0:00:00.168) 0:00:27.036 ********** 2026-03-23 00:42:23.620488 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.620499 | orchestrator | 2026-03-23 00:42:23.620510 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620521 | orchestrator | Monday 23 March 2026 00:42:17 +0000 (0:00:00.182) 0:00:27.218 ********** 2026-03-23 00:42:23.620560 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.620573 | orchestrator | 2026-03-23 00:42:23.620586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620598 | orchestrator | Monday 23 March 2026 00:42:17 +0000 (0:00:00.187) 0:00:27.405 ********** 2026-03-23 00:42:23.620610 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.620623 | orchestrator | 2026-03-23 00:42:23.620635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620646 | orchestrator | Monday 23 March 2026 00:42:17 +0000 (0:00:00.195) 0:00:27.601 ********** 2026-03-23 00:42:23.620657 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.620668 | orchestrator | 2026-03-23 00:42:23.620707 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620725 | orchestrator | Monday 23 March 2026 00:42:17 +0000 (0:00:00.164) 0:00:27.765 ********** 2026-03-23 00:42:23.620742 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.620760 | orchestrator | 2026-03-23 00:42:23.620781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620801 | orchestrator | Monday 23 March 2026 00:42:17 +0000 (0:00:00.176) 0:00:27.941 ********** 2026-03-23 00:42:23.620832 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37) 2026-03-23 00:42:23.620852 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37) 2026-03-23 00:42:23.620871 | orchestrator | 2026-03-23 00:42:23.620888 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-03-23 00:42:23.620906 | orchestrator | Monday 23 March 2026 00:42:18 +0000 (0:00:00.507) 0:00:28.449 ********** 2026-03-23 00:42:23.620949 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d) 2026-03-23 00:42:23.620970 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d) 2026-03-23 00:42:23.620989 | orchestrator | 2026-03-23 00:42:23.621007 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.621027 | orchestrator | Monday 23 March 2026 00:42:18 +0000 (0:00:00.641) 0:00:29.091 ********** 2026-03-23 00:42:23.621046 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76) 2026-03-23 00:42:23.621064 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76) 2026-03-23 00:42:23.621082 | orchestrator | 2026-03-23 00:42:23.621102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.621121 | orchestrator | Monday 23 March 2026 00:42:19 +0000 (0:00:00.367) 0:00:29.459 ********** 2026-03-23 00:42:23.621137 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d) 2026-03-23 00:42:23.621155 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d) 2026-03-23 00:42:23.621174 | orchestrator | 2026-03-23 00:42:23.621193 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:42:23.621214 | orchestrator | Monday 23 March 2026 00:42:19 +0000 (0:00:00.470) 0:00:29.929 ********** 2026-03-23 00:42:23.621235 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-23 00:42:23.621253 | 
orchestrator | 2026-03-23 00:42:23.621272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621322 | orchestrator | Monday 23 March 2026 00:42:20 +0000 (0:00:00.293) 0:00:30.223 ********** 2026-03-23 00:42:23.621339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-23 00:42:23.621350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-23 00:42:23.621361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-23 00:42:23.621372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-23 00:42:23.621393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-23 00:42:23.621404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-23 00:42:23.621414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-23 00:42:23.621425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-23 00:42:23.621436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-23 00:42:23.621453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-23 00:42:23.621471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-23 00:42:23.621488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-23 00:42:23.621504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-23 00:42:23.621522 | orchestrator | 
2026-03-23 00:42:23.621539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621559 | orchestrator | Monday 23 March 2026 00:42:20 +0000 (0:00:00.400) 0:00:30.624 ********** 2026-03-23 00:42:23.621577 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.621597 | orchestrator | 2026-03-23 00:42:23.621616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621635 | orchestrator | Monday 23 March 2026 00:42:20 +0000 (0:00:00.294) 0:00:30.919 ********** 2026-03-23 00:42:23.621653 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.621667 | orchestrator | 2026-03-23 00:42:23.621704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621717 | orchestrator | Monday 23 March 2026 00:42:20 +0000 (0:00:00.235) 0:00:31.154 ********** 2026-03-23 00:42:23.621728 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.621739 | orchestrator | 2026-03-23 00:42:23.621750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621761 | orchestrator | Monday 23 March 2026 00:42:21 +0000 (0:00:00.258) 0:00:31.413 ********** 2026-03-23 00:42:23.621771 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.621782 | orchestrator | 2026-03-23 00:42:23.621793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621803 | orchestrator | Monday 23 March 2026 00:42:21 +0000 (0:00:00.161) 0:00:31.575 ********** 2026-03-23 00:42:23.621814 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.621825 | orchestrator | 2026-03-23 00:42:23.621835 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621846 | orchestrator | Monday 23 March 2026 00:42:21 +0000 
(0:00:00.165) 0:00:31.740 ********** 2026-03-23 00:42:23.621857 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.621867 | orchestrator | 2026-03-23 00:42:23.621878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621889 | orchestrator | Monday 23 March 2026 00:42:22 +0000 (0:00:00.470) 0:00:32.211 ********** 2026-03-23 00:42:23.621899 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.621910 | orchestrator | 2026-03-23 00:42:23.621921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621932 | orchestrator | Monday 23 March 2026 00:42:22 +0000 (0:00:00.166) 0:00:32.378 ********** 2026-03-23 00:42:23.621943 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.621953 | orchestrator | 2026-03-23 00:42:23.621964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.621975 | orchestrator | Monday 23 March 2026 00:42:22 +0000 (0:00:00.177) 0:00:32.555 ********** 2026-03-23 00:42:23.621986 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-23 00:42:23.622008 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-23 00:42:23.622160 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-23 00:42:23.622178 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-23 00:42:23.622189 | orchestrator | 2026-03-23 00:42:23.622200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.622211 | orchestrator | Monday 23 March 2026 00:42:22 +0000 (0:00:00.582) 0:00:33.138 ********** 2026-03-23 00:42:23.622222 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.622233 | orchestrator | 2026-03-23 00:42:23.622244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.622254 | orchestrator 
| Monday 23 March 2026 00:42:23 +0000 (0:00:00.163) 0:00:33.301 ********** 2026-03-23 00:42:23.622265 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.622276 | orchestrator | 2026-03-23 00:42:23.622287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.622298 | orchestrator | Monday 23 March 2026 00:42:23 +0000 (0:00:00.164) 0:00:33.466 ********** 2026-03-23 00:42:23.622308 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.622319 | orchestrator | 2026-03-23 00:42:23.622342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:42:23.622354 | orchestrator | Monday 23 March 2026 00:42:23 +0000 (0:00:00.169) 0:00:33.636 ********** 2026-03-23 00:42:23.622365 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:23.622376 | orchestrator | 2026-03-23 00:42:23.622401 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-23 00:42:27.194618 | orchestrator | Monday 23 March 2026 00:42:23 +0000 (0:00:00.152) 0:00:33.788 ********** 2026-03-23 00:42:27.194820 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-23 00:42:27.194837 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-23 00:42:27.194849 | orchestrator | 2026-03-23 00:42:27.194860 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-23 00:42:27.194872 | orchestrator | Monday 23 March 2026 00:42:23 +0000 (0:00:00.139) 0:00:33.928 ********** 2026-03-23 00:42:27.194882 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.194893 | orchestrator | 2026-03-23 00:42:27.194904 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-23 00:42:27.194916 | orchestrator | Monday 23 March 2026 00:42:23 +0000 (0:00:00.104) 0:00:34.032 ********** 
2026-03-23 00:42:27.194945 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.194957 | orchestrator | 2026-03-23 00:42:27.194968 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-23 00:42:27.194978 | orchestrator | Monday 23 March 2026 00:42:23 +0000 (0:00:00.110) 0:00:34.142 ********** 2026-03-23 00:42:27.195000 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195011 | orchestrator | 2026-03-23 00:42:27.195023 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-23 00:42:27.195034 | orchestrator | Monday 23 March 2026 00:42:24 +0000 (0:00:00.094) 0:00:34.237 ********** 2026-03-23 00:42:27.195045 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:42:27.195057 | orchestrator | 2026-03-23 00:42:27.195067 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-23 00:42:27.195078 | orchestrator | Monday 23 March 2026 00:42:24 +0000 (0:00:00.244) 0:00:34.482 ********** 2026-03-23 00:42:27.195089 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7e7e409-387b-5e35-af60-96efea6ce8aa'}}) 2026-03-23 00:42:27.195105 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa6fe99-be0d-55bf-a5b2-66c7db596be7'}}) 2026-03-23 00:42:27.195116 | orchestrator | 2026-03-23 00:42:27.195127 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-23 00:42:27.195138 | orchestrator | Monday 23 March 2026 00:42:24 +0000 (0:00:00.134) 0:00:34.616 ********** 2026-03-23 00:42:27.195149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7e7e409-387b-5e35-af60-96efea6ce8aa'}})  2026-03-23 00:42:27.195183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa6fe99-be0d-55bf-a5b2-66c7db596be7'}})  
2026-03-23 00:42:27.195195 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195205 | orchestrator | 2026-03-23 00:42:27.195216 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-23 00:42:27.195227 | orchestrator | Monday 23 March 2026 00:42:24 +0000 (0:00:00.140) 0:00:34.757 ********** 2026-03-23 00:42:27.195237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7e7e409-387b-5e35-af60-96efea6ce8aa'}})  2026-03-23 00:42:27.195248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa6fe99-be0d-55bf-a5b2-66c7db596be7'}})  2026-03-23 00:42:27.195259 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195269 | orchestrator | 2026-03-23 00:42:27.195280 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-23 00:42:27.195291 | orchestrator | Monday 23 March 2026 00:42:24 +0000 (0:00:00.159) 0:00:34.916 ********** 2026-03-23 00:42:27.195301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7e7e409-387b-5e35-af60-96efea6ce8aa'}})  2026-03-23 00:42:27.195312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa6fe99-be0d-55bf-a5b2-66c7db596be7'}})  2026-03-23 00:42:27.195323 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195333 | orchestrator | 2026-03-23 00:42:27.195344 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-23 00:42:27.195354 | orchestrator | Monday 23 March 2026 00:42:24 +0000 (0:00:00.155) 0:00:35.072 ********** 2026-03-23 00:42:27.195365 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:42:27.195376 | orchestrator | 2026-03-23 00:42:27.195386 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-23 00:42:27.195397 | 
orchestrator | Monday 23 March 2026 00:42:25 +0000 (0:00:00.158) 0:00:35.230 ********** 2026-03-23 00:42:27.195407 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:42:27.195418 | orchestrator | 2026-03-23 00:42:27.195428 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-23 00:42:27.195439 | orchestrator | Monday 23 March 2026 00:42:25 +0000 (0:00:00.124) 0:00:35.354 ********** 2026-03-23 00:42:27.195449 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195460 | orchestrator | 2026-03-23 00:42:27.195470 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-23 00:42:27.195481 | orchestrator | Monday 23 March 2026 00:42:25 +0000 (0:00:00.117) 0:00:35.472 ********** 2026-03-23 00:42:27.195491 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195502 | orchestrator | 2026-03-23 00:42:27.195512 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-23 00:42:27.195523 | orchestrator | Monday 23 March 2026 00:42:25 +0000 (0:00:00.125) 0:00:35.597 ********** 2026-03-23 00:42:27.195533 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195544 | orchestrator | 2026-03-23 00:42:27.195554 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-23 00:42:27.195565 | orchestrator | Monday 23 March 2026 00:42:25 +0000 (0:00:00.102) 0:00:35.700 ********** 2026-03-23 00:42:27.195576 | orchestrator | ok: [testbed-node-5] => { 2026-03-23 00:42:27.195586 | orchestrator |  "ceph_osd_devices": { 2026-03-23 00:42:27.195597 | orchestrator |  "sdb": { 2026-03-23 00:42:27.195641 | orchestrator |  "osd_lvm_uuid": "b7e7e409-387b-5e35-af60-96efea6ce8aa" 2026-03-23 00:42:27.195654 | orchestrator |  }, 2026-03-23 00:42:27.195665 | orchestrator |  "sdc": { 2026-03-23 00:42:27.195676 | orchestrator |  "osd_lvm_uuid": 
"6fa6fe99-be0d-55bf-a5b2-66c7db596be7" 2026-03-23 00:42:27.195711 | orchestrator |  } 2026-03-23 00:42:27.195722 | orchestrator |  } 2026-03-23 00:42:27.195733 | orchestrator | } 2026-03-23 00:42:27.195743 | orchestrator | 2026-03-23 00:42:27.195762 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-23 00:42:27.195773 | orchestrator | Monday 23 March 2026 00:42:25 +0000 (0:00:00.133) 0:00:35.833 ********** 2026-03-23 00:42:27.195783 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195794 | orchestrator | 2026-03-23 00:42:27.195805 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-23 00:42:27.195815 | orchestrator | Monday 23 March 2026 00:42:25 +0000 (0:00:00.131) 0:00:35.965 ********** 2026-03-23 00:42:27.195826 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195836 | orchestrator | 2026-03-23 00:42:27.195847 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-23 00:42:27.195858 | orchestrator | Monday 23 March 2026 00:42:26 +0000 (0:00:00.267) 0:00:36.233 ********** 2026-03-23 00:42:27.195868 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:42:27.195879 | orchestrator | 2026-03-23 00:42:27.195889 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-23 00:42:27.195900 | orchestrator | Monday 23 March 2026 00:42:26 +0000 (0:00:00.118) 0:00:36.351 ********** 2026-03-23 00:42:27.195910 | orchestrator | changed: [testbed-node-5] => { 2026-03-23 00:42:27.195921 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-23 00:42:27.195933 | orchestrator |  "ceph_osd_devices": { 2026-03-23 00:42:27.195943 | orchestrator |  "sdb": { 2026-03-23 00:42:27.195954 | orchestrator |  "osd_lvm_uuid": "b7e7e409-387b-5e35-af60-96efea6ce8aa" 2026-03-23 00:42:27.195965 | orchestrator |  }, 2026-03-23 00:42:27.195975 | 
orchestrator |  "sdc": { 2026-03-23 00:42:27.195986 | orchestrator |  "osd_lvm_uuid": "6fa6fe99-be0d-55bf-a5b2-66c7db596be7" 2026-03-23 00:42:27.195997 | orchestrator |  } 2026-03-23 00:42:27.196007 | orchestrator |  }, 2026-03-23 00:42:27.196018 | orchestrator |  "lvm_volumes": [ 2026-03-23 00:42:27.196029 | orchestrator |  { 2026-03-23 00:42:27.196039 | orchestrator |  "data": "osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa", 2026-03-23 00:42:27.196050 | orchestrator |  "data_vg": "ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa" 2026-03-23 00:42:27.196061 | orchestrator |  }, 2026-03-23 00:42:27.196075 | orchestrator |  { 2026-03-23 00:42:27.196087 | orchestrator |  "data": "osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7", 2026-03-23 00:42:27.196097 | orchestrator |  "data_vg": "ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7" 2026-03-23 00:42:27.196108 | orchestrator |  } 2026-03-23 00:42:27.196119 | orchestrator |  ] 2026-03-23 00:42:27.196130 | orchestrator |  } 2026-03-23 00:42:27.196140 | orchestrator | } 2026-03-23 00:42:27.196151 | orchestrator | 2026-03-23 00:42:27.196162 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-23 00:42:27.196173 | orchestrator | Monday 23 March 2026 00:42:26 +0000 (0:00:00.191) 0:00:36.543 ********** 2026-03-23 00:42:27.196184 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-23 00:42:27.196194 | orchestrator | 2026-03-23 00:42:27.196205 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:42:27.196216 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-23 00:42:27.196228 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-23 00:42:27.196239 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-23 
00:42:27.196250 | orchestrator | 2026-03-23 00:42:27.196261 | orchestrator | 2026-03-23 00:42:27.196271 | orchestrator | 2026-03-23 00:42:27.196282 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:42:27.196292 | orchestrator | Monday 23 March 2026 00:42:27 +0000 (0:00:00.806) 0:00:37.349 ********** 2026-03-23 00:42:27.196310 | orchestrator | =============================================================================== 2026-03-23 00:42:27.196321 | orchestrator | Write configuration file ------------------------------------------------ 3.67s 2026-03-23 00:42:27.196331 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2026-03-23 00:42:27.196348 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s 2026-03-23 00:42:27.196359 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s 2026-03-23 00:42:27.196370 | orchestrator | Get initial list of available block devices ----------------------------- 0.95s 2026-03-23 00:42:27.196381 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-03-23 00:42:27.196392 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-03-23 00:42:27.196402 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-03-23 00:42:27.196413 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2026-03-23 00:42:27.196424 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2026-03-23 00:42:27.196434 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2026-03-23 00:42:27.196445 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2026-03-23 
00:42:27.196456 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-03-23 00:42:27.196474 | orchestrator | Print configuration data ------------------------------------------------ 0.57s 2026-03-23 00:42:27.393235 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.56s 2026-03-23 00:42:27.393311 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.55s 2026-03-23 00:42:27.393319 | orchestrator | Print DB devices -------------------------------------------------------- 0.52s 2026-03-23 00:42:27.393325 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s 2026-03-23 00:42:27.393331 | orchestrator | Set WAL devices config data --------------------------------------------- 0.49s 2026-03-23 00:42:27.393336 | orchestrator | Add known links to the list of available block devices ------------------ 0.47s 2026-03-23 00:42:48.852557 | orchestrator | 2026-03-23 00:42:48 | INFO  | Task de023be6-6781-44fb-ac38-99a36050d28a (sync inventory) is running in background. Output coming soon. 
2026-03-23 00:43:16.559713 | orchestrator | 2026-03-23 00:42:50 | INFO  | Starting group_vars file reorganization
2026-03-23 00:43:16.559854 | orchestrator | 2026-03-23 00:42:50 | INFO  | Moved 0 file(s) to their respective directories
2026-03-23 00:43:16.559874 | orchestrator | 2026-03-23 00:42:50 | INFO  | Group_vars file reorganization completed
2026-03-23 00:43:16.559886 | orchestrator | 2026-03-23 00:42:53 | INFO  | Starting variable preparation from inventory
2026-03-23 00:43:16.559898 | orchestrator | 2026-03-23 00:42:55 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-23 00:43:16.559910 | orchestrator | 2026-03-23 00:42:55 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-23 00:43:16.559941 | orchestrator | 2026-03-23 00:42:55 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-23 00:43:16.559962 | orchestrator | 2026-03-23 00:42:55 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-23 00:43:16.559981 | orchestrator | 2026-03-23 00:42:55 | INFO  | Variable preparation completed
2026-03-23 00:43:16.560000 | orchestrator | 2026-03-23 00:42:56 | INFO  | Starting inventory overwrite handling
2026-03-23 00:43:16.560019 | orchestrator | 2026-03-23 00:42:56 | INFO  | Handling group overwrites in 99-overwrite
2026-03-23 00:43:16.560038 | orchestrator | 2026-03-23 00:42:56 | INFO  | Removing group frr:children from 60-generic
2026-03-23 00:43:16.560090 | orchestrator | 2026-03-23 00:42:56 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-23 00:43:16.560108 | orchestrator | 2026-03-23 00:42:56 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-23 00:43:16.560128 | orchestrator | 2026-03-23 00:42:56 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-23 00:43:16.560147 | orchestrator | 2026-03-23 00:42:56 | INFO  | Handling group overwrites in 20-roles
2026-03-23 00:43:16.560166 | orchestrator | 2026-03-23 00:42:56 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-23 00:43:16.560186 | orchestrator | 2026-03-23 00:42:56 | INFO  | Removed 5 group(s) in total
2026-03-23 00:43:16.560203 | orchestrator | 2026-03-23 00:42:56 | INFO  | Inventory overwrite handling completed
2026-03-23 00:43:16.560221 | orchestrator | 2026-03-23 00:42:57 | INFO  | Starting merge of inventory files
2026-03-23 00:43:16.560259 | orchestrator | 2026-03-23 00:42:57 | INFO  | Inventory files merged successfully
2026-03-23 00:43:16.560290 | orchestrator | 2026-03-23 00:43:01 | INFO  | Generating minified hosts file
2026-03-23 00:43:16.560307 | orchestrator | 2026-03-23 00:43:03 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-03-23 00:43:16.560326 | orchestrator | 2026-03-23 00:43:03 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-03-23 00:43:16.560343 | orchestrator | 2026-03-23 00:43:04 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-23 00:43:16.560361 | orchestrator | 2026-03-23 00:43:15 | INFO  | Successfully wrote ClusterShell configuration
2026-03-23 00:43:16.560379 | orchestrator | [master 4272608] 2026-03-23-00-43
2026-03-23 00:43:16.560399 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-03-23 00:43:16.560420 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-03-23 00:43:16.560439 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-03-23 00:43:16.560458 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-03-23 00:43:18.170653 | orchestrator | 2026-03-23 00:43:18 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-23 00:43:18.232838 | orchestrator | 2026-03-23 00:43:18 | INFO  | Task e46010c4-d8a2-4942-8e14-34e3913f7d86 (ceph-create-lvm-devices) was prepared for execution.
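The "Removing group X from Y" messages above come from the inventory overwrite pass: when a group is redefined in a higher-priority layer such as `99-overwrite` or `20-roles`, its definition is dropped from the lower-priority files (`50-ceph`, `50-infrastructure`, ...) before the merge, so only the overriding definition survives. A rough illustration of that precedence rule; this is not the actual OSISM implementation, and the `handle_overwrites` helper and its data shape are made up for this sketch:

```python
def handle_overwrites(layers):
    """layers: dict of layer name -> set of group names, ordered from
    lowest to highest priority (dicts preserve insertion order).
    Drop any group from a lower layer that a higher layer redefines,
    so the merged inventory keeps only the highest-priority definition."""
    names = list(layers)
    messages = []
    for i, lower in enumerate(names):
        for higher in names[i + 1:]:
            # Snapshot the overlap before mutating the lower layer.
            for group in sorted(layers[lower] & layers[higher]):
                layers[lower].discard(group)
                messages.append(f"Removing group {group} from {lower}")
    return messages
```

With `{"50-ceph": {"ceph-rgw", "ceph-mds", "ceph-mon"}, "99-overwrite": {"ceph-rgw", "ceph-mds"}}` this removes the two redefined Ceph groups from `50-ceph`, matching the log messages above.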
2026-03-23 00:43:18.232932 | orchestrator | 2026-03-23 00:43:18 | INFO  | It takes a moment until task e46010c4-d8a2-4942-8e14-34e3913f7d86 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-23 00:43:29.289595 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-23 00:43:29.289672 | orchestrator | 2.16.14
2026-03-23 00:43:29.289742 | orchestrator |
2026-03-23 00:43:29.289749 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-23 00:43:29.289754 | orchestrator |
2026-03-23 00:43:29.289758 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-23 00:43:29.289762 | orchestrator | Monday 23 March 2026 00:43:22 +0000 (0:00:00.202) 0:00:00.202 **********
2026-03-23 00:43:29.289767 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-23 00:43:29.289771 | orchestrator |
2026-03-23 00:43:29.289775 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-23 00:43:29.289779 | orchestrator | Monday 23 March 2026 00:43:22 +0000 (0:00:00.232) 0:00:00.435 **********
2026-03-23 00:43:29.289783 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:43:29.289787 | orchestrator |
2026-03-23 00:43:29.289791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.289795 | orchestrator | Monday 23 March 2026 00:43:22 +0000 (0:00:00.209) 0:00:00.645 **********
2026-03-23 00:43:29.289813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-23 00:43:29.289817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-23 00:43:29.289820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-23 00:43:29.289824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-23 00:43:29.289828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-23 00:43:29.289832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-23 00:43:29.289836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-23 00:43:29.289840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-23 00:43:29.289843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-23 00:43:29.289847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-23 00:43:29.289851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-23 00:43:29.289855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-23 00:43:29.289858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-23 00:43:29.289862 | orchestrator |
2026-03-23 00:43:29.289866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.289870 | orchestrator | Monday 23 March 2026 00:43:23 +0000 (0:00:00.412) 0:00:01.058 **********
2026-03-23 00:43:29.289873 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.289877 | orchestrator |
2026-03-23 00:43:29.289881 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.289885 | orchestrator | Monday 23 March 2026 00:43:23 +0000 (0:00:00.350) 0:00:01.408 **********
2026-03-23 00:43:29.289888 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.289892 | orchestrator |
2026-03-23 00:43:29.289896 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.289899 | orchestrator | Monday 23 March 2026 00:43:23 +0000 (0:00:00.175) 0:00:01.583 **********
2026-03-23 00:43:29.289914 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.289918 | orchestrator |
2026-03-23 00:43:29.289922 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.289926 | orchestrator | Monday 23 March 2026 00:43:23 +0000 (0:00:00.172) 0:00:01.756 **********
2026-03-23 00:43:29.289929 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.289933 | orchestrator |
2026-03-23 00:43:29.289937 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.289941 | orchestrator | Monday 23 March 2026 00:43:24 +0000 (0:00:00.170) 0:00:01.927 **********
2026-03-23 00:43:29.289944 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.289948 | orchestrator |
2026-03-23 00:43:29.289952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.289955 | orchestrator | Monday 23 March 2026 00:43:24 +0000 (0:00:00.180) 0:00:02.107 **********
2026-03-23 00:43:29.289959 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.289963 | orchestrator |
2026-03-23 00:43:29.289966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.289970 | orchestrator | Monday 23 March 2026 00:43:24 +0000 (0:00:00.186) 0:00:02.294 **********
2026-03-23 00:43:29.289974 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.289978 | orchestrator |
2026-03-23 00:43:29.289982 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.289985 | orchestrator | Monday 23 March 2026 00:43:24 +0000 (0:00:00.182) 0:00:02.477 **********
2026-03-23 00:43:29.289989 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.289996 | orchestrator |
2026-03-23 00:43:29.290000 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.290004 | orchestrator | Monday 23 March 2026 00:43:24 +0000 (0:00:00.177) 0:00:02.655 **********
2026-03-23 00:43:29.290008 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15)
2026-03-23 00:43:29.290012 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15)
2026-03-23 00:43:29.290052 | orchestrator |
2026-03-23 00:43:29.290059 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.290082 | orchestrator | Monday 23 March 2026 00:43:25 +0000 (0:00:00.449) 0:00:03.104 **********
2026-03-23 00:43:29.290088 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f)
2026-03-23 00:43:29.290095 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f)
2026-03-23 00:43:29.290100 | orchestrator |
2026-03-23 00:43:29.290106 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.290111 | orchestrator | Monday 23 March 2026 00:43:25 +0000 (0:00:00.366) 0:00:03.471 **********
2026-03-23 00:43:29.290117 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0)
2026-03-23 00:43:29.290123 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0)
2026-03-23 00:43:29.290129 | orchestrator |
2026-03-23 00:43:29.290135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.290140 | orchestrator | Monday 23 March 2026 00:43:26 +0000 (0:00:00.614) 0:00:04.086 **********
2026-03-23 00:43:29.290146 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4)
2026-03-23 00:43:29.290152 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4)
2026-03-23 00:43:29.290159 | orchestrator |
2026-03-23 00:43:29.290166 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:43:29.290173 | orchestrator | Monday 23 March 2026 00:43:26 +0000 (0:00:00.622) 0:00:04.708 **********
2026-03-23 00:43:29.290179 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-23 00:43:29.290186 | orchestrator |
2026-03-23 00:43:29.290192 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:29.290205 | orchestrator | Monday 23 March 2026 00:43:27 +0000 (0:00:00.646) 0:00:05.355 **********
2026-03-23 00:43:29.290211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-23 00:43:29.290218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-23 00:43:29.290224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-23 00:43:29.290231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-23 00:43:29.290237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-23 00:43:29.290244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-23 00:43:29.290250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-23 00:43:29.290257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-23 00:43:29.290261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-23 00:43:29.290266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-23 00:43:29.290270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-23 00:43:29.290274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-23 00:43:29.290283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-23 00:43:29.290287 | orchestrator |
2026-03-23 00:43:29.290291 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:29.290296 | orchestrator | Monday 23 March 2026 00:43:27 +0000 (0:00:00.383) 0:00:05.739 **********
2026-03-23 00:43:29.290300 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.290304 | orchestrator |
2026-03-23 00:43:29.290308 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:29.290312 | orchestrator | Monday 23 March 2026 00:43:28 +0000 (0:00:00.194) 0:00:05.933 **********
2026-03-23 00:43:29.290317 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.290321 | orchestrator |
2026-03-23 00:43:29.290325 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:29.290329 | orchestrator | Monday 23 March 2026 00:43:28 +0000 (0:00:00.239) 0:00:06.173 **********
2026-03-23 00:43:29.290333 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.290337 | orchestrator |
2026-03-23 00:43:29.290341 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:29.290346 | orchestrator | Monday 23 March 2026 00:43:28 +0000 (0:00:00.229) 0:00:06.403 **********
2026-03-23 00:43:29.290350 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.290354 | orchestrator |
2026-03-23 00:43:29.290358 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:29.290362 | orchestrator | Monday 23 March 2026 00:43:28 +0000 (0:00:00.214) 0:00:06.617 **********
2026-03-23 00:43:29.290367 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.290372 | orchestrator |
2026-03-23 00:43:29.290378 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:29.290384 | orchestrator | Monday 23 March 2026 00:43:28 +0000 (0:00:00.186) 0:00:06.804 **********
2026-03-23 00:43:29.290390 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.290396 | orchestrator |
2026-03-23 00:43:29.290403 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:29.290410 | orchestrator | Monday 23 March 2026 00:43:29 +0000 (0:00:00.183) 0:00:06.987 **********
2026-03-23 00:43:29.290417 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:29.290421 | orchestrator |
2026-03-23 00:43:29.290430 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:36.658530 | orchestrator | Monday 23 March 2026 00:43:29 +0000 (0:00:00.178) 0:00:07.166 **********
2026-03-23 00:43:36.658630 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.658642 | orchestrator |
2026-03-23 00:43:36.658648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:36.658654 | orchestrator | Monday 23 March 2026 00:43:29 +0000 (0:00:00.199) 0:00:07.365 **********
2026-03-23 00:43:36.658659 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-23 00:43:36.658665 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-23 00:43:36.658670 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-23 00:43:36.658709 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-23 00:43:36.658715 | orchestrator |
2026-03-23 00:43:36.658720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:36.658726 | orchestrator | Monday 23 March 2026 00:43:30 +0000 (0:00:00.885) 0:00:08.250 **********
2026-03-23 00:43:36.658731 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.658736 | orchestrator |
2026-03-23 00:43:36.658741 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:36.658746 | orchestrator | Monday 23 March 2026 00:43:30 +0000 (0:00:00.205) 0:00:08.456 **********
2026-03-23 00:43:36.658751 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.658755 | orchestrator |
2026-03-23 00:43:36.658760 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:36.658782 | orchestrator | Monday 23 March 2026 00:43:30 +0000 (0:00:00.187) 0:00:08.644 **********
2026-03-23 00:43:36.658788 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.658793 | orchestrator |
2026-03-23 00:43:36.658799 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:36.658804 | orchestrator | Monday 23 March 2026 00:43:30 +0000 (0:00:00.197) 0:00:08.841 **********
2026-03-23 00:43:36.658809 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.658815 | orchestrator |
2026-03-23 00:43:36.658820 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-23 00:43:36.658826 | orchestrator | Monday 23 March 2026 00:43:31 +0000 (0:00:00.175) 0:00:09.017 **********
2026-03-23 00:43:36.658831 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.658836 | orchestrator |
2026-03-23 00:43:36.658842 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-23 00:43:36.658847 | orchestrator | Monday 23 March 2026 00:43:31 +0000 (0:00:00.125) 0:00:09.143 **********
2026-03-23 00:43:36.658854 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4e8fe5fb-1ce5-58e9-8668-0121db885e3a'}})
2026-03-23 00:43:36.658859 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '64892dc7-40b9-50f4-a971-7ffdf1a56e40'}})
2026-03-23 00:43:36.658865 | orchestrator |
2026-03-23 00:43:36.658870 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-23 00:43:36.658875 | orchestrator | Monday 23 March 2026 00:43:31 +0000 (0:00:00.163) 0:00:09.307 **********
2026-03-23 00:43:36.658882 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.658888 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.658894 | orchestrator |
2026-03-23 00:43:36.658900 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-23 00:43:36.658905 | orchestrator | Monday 23 March 2026 00:43:33 +0000 (0:00:01.940) 0:00:11.247 **********
2026-03-23 00:43:36.658910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.658929 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.658935 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.658941 | orchestrator |
2026-03-23 00:43:36.658946 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-23 00:43:36.658951 | orchestrator | Monday 23 March 2026 00:43:33 +0000 (0:00:00.125) 0:00:11.373 **********
2026-03-23 00:43:36.658957 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.658962 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.658968 | orchestrator |
2026-03-23 00:43:36.658973 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-23 00:43:36.658979 | orchestrator | Monday 23 March 2026 00:43:35 +0000 (0:00:01.525) 0:00:12.898 **********
2026-03-23 00:43:36.658984 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.658989 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.658995 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659000 | orchestrator |
2026-03-23 00:43:36.659006 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-23 00:43:36.659018 | orchestrator | Monday 23 March 2026 00:43:35 +0000 (0:00:00.125) 0:00:13.023 **********
2026-03-23 00:43:36.659036 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659042 | orchestrator |
2026-03-23 00:43:36.659047 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-23 00:43:36.659053 | orchestrator | Monday 23 March 2026 00:43:35 +0000 (0:00:00.126) 0:00:13.150 **********
2026-03-23 00:43:36.659058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.659063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.659069 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659074 | orchestrator |
2026-03-23 00:43:36.659079 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-23 00:43:36.659085 | orchestrator | Monday 23 March 2026 00:43:35 +0000 (0:00:00.278) 0:00:13.429 **********
2026-03-23 00:43:36.659090 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659097 | orchestrator |
2026-03-23 00:43:36.659103 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-23 00:43:36.659109 | orchestrator | Monday 23 March 2026 00:43:35 +0000 (0:00:00.123) 0:00:13.552 **********
2026-03-23 00:43:36.659115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.659121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.659127 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659133 | orchestrator |
2026-03-23 00:43:36.659143 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-23 00:43:36.659149 | orchestrator | Monday 23 March 2026 00:43:35 +0000 (0:00:00.127) 0:00:13.679 **********
2026-03-23 00:43:36.659156 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659162 | orchestrator |
2026-03-23 00:43:36.659168 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-23 00:43:36.659174 | orchestrator | Monday 23 March 2026 00:43:35 +0000 (0:00:00.111) 0:00:13.791 **********
2026-03-23 00:43:36.659180 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.659186 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.659193 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659199 | orchestrator |
2026-03-23 00:43:36.659205 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-23 00:43:36.659211 | orchestrator | Monday 23 March 2026 00:43:36 +0000 (0:00:00.113) 0:00:13.911 **********
2026-03-23 00:43:36.659217 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:43:36.659223 | orchestrator |
2026-03-23 00:43:36.659229 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-23 00:43:36.659235 | orchestrator | Monday 23 March 2026 00:43:36 +0000 (0:00:00.124) 0:00:14.024 **********
2026-03-23 00:43:36.659241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.659248 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.659254 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659260 | orchestrator |
2026-03-23 00:43:36.659266 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-23 00:43:36.659277 | orchestrator | Monday 23 March 2026 00:43:36 +0000 (0:00:00.124) 0:00:14.149 **********
2026-03-23 00:43:36.659283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.659289 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.659296 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659302 | orchestrator |
2026-03-23 00:43:36.659308 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-23 00:43:36.659314 | orchestrator | Monday 23 March 2026 00:43:36 +0000 (0:00:00.131) 0:00:14.280 **********
2026-03-23 00:43:36.659320 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})
2026-03-23 00:43:36.659326 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})
2026-03-23 00:43:36.659332 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659338 | orchestrator |
2026-03-23 00:43:36.659344 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-23 00:43:36.659350 | orchestrator | Monday 23 March 2026 00:43:36 +0000 (0:00:00.130) 0:00:14.411 **********
2026-03-23 00:43:36.659357 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:36.659363 | orchestrator |
2026-03-23 00:43:36.659369 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-23 00:43:36.659379 | orchestrator | Monday 23 March 2026 00:43:36 +0000 (0:00:00.124) 0:00:14.536 **********
2026-03-23 00:43:42.665448 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665526 | orchestrator |
2026-03-23 00:43:42.665534 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-23 00:43:42.665541 | orchestrator | Monday 23 March 2026 00:43:36 +0000 (0:00:00.114) 0:00:14.650 **********
2026-03-23 00:43:42.665547 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665552 | orchestrator |
2026-03-23 00:43:42.665557 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-23 00:43:42.665563 | orchestrator | Monday 23 March 2026 00:43:36 +0000 (0:00:00.118) 0:00:14.768 **********
2026-03-23 00:43:42.665568 | orchestrator | ok: [testbed-node-3] => {
2026-03-23 00:43:42.665575 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-23 00:43:42.665580 | orchestrator | }
2026-03-23 00:43:42.665586 | orchestrator |
2026-03-23 00:43:42.665591 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-23 00:43:42.665597 | orchestrator | Monday 23 March 2026 00:43:37 +0000 (0:00:00.276) 0:00:15.045 **********
2026-03-23 00:43:42.665602 | orchestrator | ok: [testbed-node-3] => {
2026-03-23 00:43:42.665607 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-23 00:43:42.665612 | orchestrator | }
2026-03-23 00:43:42.665617 | orchestrator |
2026-03-23 00:43:42.665622 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-23 00:43:42.665628 | orchestrator | Monday 23 March 2026 00:43:37 +0000 (0:00:00.140) 0:00:15.186 **********
2026-03-23 00:43:42.665633 | orchestrator | ok: [testbed-node-3] => {
2026-03-23 00:43:42.665638 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-23 00:43:42.665643 | orchestrator | }
2026-03-23 00:43:42.665648 | orchestrator |
2026-03-23 00:43:42.665653 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-23 00:43:42.665658 | orchestrator | Monday 23 March 2026 00:43:37 +0000 (0:00:00.122) 0:00:15.308 **********
2026-03-23 00:43:42.665664 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:43:42.665669 | orchestrator |
2026-03-23 00:43:42.665674 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-23 00:43:42.665717 | orchestrator | Monday 23 March 2026 00:43:38 +0000 (0:00:00.652) 0:00:15.961 **********
2026-03-23 00:43:42.665739 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:43:42.665744 | orchestrator |
2026-03-23 00:43:42.665749 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-23 00:43:42.665755 | orchestrator | Monday 23 March 2026 00:43:38 +0000 (0:00:00.505) 0:00:16.466 **********
2026-03-23 00:43:42.665760 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:43:42.665765 | orchestrator |
2026-03-23 00:43:42.665770 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-23 00:43:42.665775 | orchestrator | Monday 23 March 2026 00:43:39 +0000 (0:00:00.528) 0:00:16.995 **********
2026-03-23 00:43:42.665780 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:43:42.665785 | orchestrator |
2026-03-23 00:43:42.665790 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-23 00:43:42.665796 | orchestrator | Monday 23 March 2026 00:43:39 +0000 (0:00:00.139) 0:00:17.134 **********
2026-03-23 00:43:42.665801 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665806 | orchestrator |
2026-03-23 00:43:42.665811 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-23 00:43:42.665816 | orchestrator | Monday 23 March 2026 00:43:39 +0000 (0:00:00.114) 0:00:17.249 **********
2026-03-23 00:43:42.665821 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665826 | orchestrator |
2026-03-23 00:43:42.665831 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-23 00:43:42.665836 | orchestrator | Monday 23 March 2026 00:43:39 +0000 (0:00:00.107) 0:00:17.357 **********
2026-03-23 00:43:42.665841 | orchestrator | ok: [testbed-node-3] => {
2026-03-23 00:43:42.665846 | orchestrator |     "vgs_report": {
2026-03-23 00:43:42.665851 | orchestrator |         "vg": []
2026-03-23 00:43:42.665857 | orchestrator |     }
2026-03-23 00:43:42.665862 | orchestrator | }
2026-03-23 00:43:42.665867 | orchestrator |
2026-03-23 00:43:42.665872 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-23 00:43:42.665877 | orchestrator | Monday 23 March 2026 00:43:39 +0000 (0:00:00.135) 0:00:17.493 **********
2026-03-23 00:43:42.665882 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665887 | orchestrator |
2026-03-23 00:43:42.665892 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-23 00:43:42.665897 | orchestrator | Monday 23 March 2026 00:43:39 +0000 (0:00:00.112) 0:00:17.605 **********
2026-03-23 00:43:42.665902 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665907 | orchestrator |
2026-03-23 00:43:42.665912 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-23 00:43:42.665918 | orchestrator | Monday 23 March 2026 00:43:39 +0000 (0:00:00.120) 0:00:17.726 **********
2026-03-23 00:43:42.665923 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665928 | orchestrator |
2026-03-23 00:43:42.665933 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-23 00:43:42.665938 | orchestrator | Monday 23 March 2026 00:43:40 +0000 (0:00:00.273) 0:00:18.000 **********
2026-03-23 00:43:42.665943 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665948 | orchestrator |
2026-03-23 00:43:42.665953 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-23 00:43:42.665958 | orchestrator | Monday 23 March 2026 00:43:40 +0000 (0:00:00.139) 0:00:18.139 **********
2026-03-23 00:43:42.665963 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665968 | orchestrator |
2026-03-23 00:43:42.665973 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-23 00:43:42.665978 | orchestrator | Monday 23 March 2026 00:43:40 +0000 (0:00:00.129) 0:00:18.268 **********
2026-03-23 00:43:42.665983 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.665988 | orchestrator |
2026-03-23 00:43:42.665993 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-23 00:43:42.665998 | orchestrator | Monday 23 March 2026 00:43:40 +0000 (0:00:00.117) 0:00:18.385 **********
2026-03-23 00:43:42.666003 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.666013 | orchestrator |
2026-03-23 00:43:42.666100 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-23 00:43:42.666107 | orchestrator | Monday 23 March 2026 00:43:40 +0000 (0:00:00.129) 0:00:18.515 **********
2026-03-23 00:43:42.666127 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.666132 | orchestrator |
2026-03-23 00:43:42.666149 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-23 00:43:42.666154 | orchestrator | Monday 23 March 2026 00:43:40 +0000 (0:00:00.141) 0:00:18.656 **********
2026-03-23 00:43:42.666159 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:43:42.666164 | orchestrator |
2026-03-23 00:43:42.666170 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-23 00:43:42.666175 | orchestrator | Monday 23
March 2026 00:43:40 +0000 (0:00:00.123) 0:00:18.780 ********** 2026-03-23 00:43:42.666180 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666185 | orchestrator | 2026-03-23 00:43:42.666190 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-23 00:43:42.666195 | orchestrator | Monday 23 March 2026 00:43:41 +0000 (0:00:00.129) 0:00:18.910 ********** 2026-03-23 00:43:42.666200 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666205 | orchestrator | 2026-03-23 00:43:42.666210 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-23 00:43:42.666215 | orchestrator | Monday 23 March 2026 00:43:41 +0000 (0:00:00.123) 0:00:19.034 ********** 2026-03-23 00:43:42.666220 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666226 | orchestrator | 2026-03-23 00:43:42.666231 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-23 00:43:42.666236 | orchestrator | Monday 23 March 2026 00:43:41 +0000 (0:00:00.118) 0:00:19.153 ********** 2026-03-23 00:43:42.666241 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666246 | orchestrator | 2026-03-23 00:43:42.666251 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-23 00:43:42.666256 | orchestrator | Monday 23 March 2026 00:43:41 +0000 (0:00:00.127) 0:00:19.280 ********** 2026-03-23 00:43:42.666261 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666266 | orchestrator | 2026-03-23 00:43:42.666274 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-23 00:43:42.666279 | orchestrator | Monday 23 March 2026 00:43:41 +0000 (0:00:00.127) 0:00:19.407 ********** 2026-03-23 00:43:42.666286 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 
'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:42.666293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:42.666298 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666303 | orchestrator | 2026-03-23 00:43:42.666308 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-23 00:43:42.666313 | orchestrator | Monday 23 March 2026 00:43:41 +0000 (0:00:00.184) 0:00:19.592 ********** 2026-03-23 00:43:42.666318 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:42.666323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:42.666329 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666334 | orchestrator | 2026-03-23 00:43:42.666339 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-23 00:43:42.666344 | orchestrator | Monday 23 March 2026 00:43:42 +0000 (0:00:00.387) 0:00:19.979 ********** 2026-03-23 00:43:42.666349 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:42.666354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:42.666365 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666370 | orchestrator | 2026-03-23 00:43:42.666375 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-23 
00:43:42.666380 | orchestrator | Monday 23 March 2026 00:43:42 +0000 (0:00:00.165) 0:00:20.144 ********** 2026-03-23 00:43:42.666385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:42.666390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:42.666395 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666400 | orchestrator | 2026-03-23 00:43:42.666406 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-23 00:43:42.666411 | orchestrator | Monday 23 March 2026 00:43:42 +0000 (0:00:00.181) 0:00:20.326 ********** 2026-03-23 00:43:42.666416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:42.666421 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:42.666426 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:42.666431 | orchestrator | 2026-03-23 00:43:42.666436 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-23 00:43:42.666441 | orchestrator | Monday 23 March 2026 00:43:42 +0000 (0:00:00.154) 0:00:20.481 ********** 2026-03-23 00:43:42.666450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:48.345424 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 
'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:48.345537 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:48.345551 | orchestrator | 2026-03-23 00:43:48.345559 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-23 00:43:48.345569 | orchestrator | Monday 23 March 2026 00:43:42 +0000 (0:00:00.150) 0:00:20.632 ********** 2026-03-23 00:43:48.345576 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:48.345584 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:48.345591 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:48.345598 | orchestrator | 2026-03-23 00:43:48.345605 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-23 00:43:48.345648 | orchestrator | Monday 23 March 2026 00:43:42 +0000 (0:00:00.163) 0:00:20.795 ********** 2026-03-23 00:43:48.345656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:48.345714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:48.345723 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:48.345731 | orchestrator | 2026-03-23 00:43:48.345738 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-23 00:43:48.345745 | orchestrator | Monday 23 March 2026 00:43:43 +0000 (0:00:00.158) 0:00:20.953 ********** 2026-03-23 00:43:48.345752 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:43:48.345760 | 
orchestrator | 2026-03-23 00:43:48.345789 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-23 00:43:48.345796 | orchestrator | Monday 23 March 2026 00:43:43 +0000 (0:00:00.527) 0:00:21.480 ********** 2026-03-23 00:43:48.345803 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:43:48.345809 | orchestrator | 2026-03-23 00:43:48.345816 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-23 00:43:48.345823 | orchestrator | Monday 23 March 2026 00:43:44 +0000 (0:00:00.588) 0:00:22.068 ********** 2026-03-23 00:43:48.345829 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:43:48.345836 | orchestrator | 2026-03-23 00:43:48.345843 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-23 00:43:48.345849 | orchestrator | Monday 23 March 2026 00:43:44 +0000 (0:00:00.169) 0:00:22.238 ********** 2026-03-23 00:43:48.345857 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'vg_name': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'}) 2026-03-23 00:43:48.345865 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'vg_name': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'}) 2026-03-23 00:43:48.345871 | orchestrator | 2026-03-23 00:43:48.345878 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-23 00:43:48.345885 | orchestrator | Monday 23 March 2026 00:43:44 +0000 (0:00:00.175) 0:00:22.414 ********** 2026-03-23 00:43:48.345892 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:48.345899 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 
'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:48.345905 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:48.345912 | orchestrator | 2026-03-23 00:43:48.345919 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-23 00:43:48.345926 | orchestrator | Monday 23 March 2026 00:43:44 +0000 (0:00:00.147) 0:00:22.561 ********** 2026-03-23 00:43:48.345932 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:48.345939 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:48.345946 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:48.345953 | orchestrator | 2026-03-23 00:43:48.345960 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-23 00:43:48.345967 | orchestrator | Monday 23 March 2026 00:43:45 +0000 (0:00:00.368) 0:00:22.929 ********** 2026-03-23 00:43:48.345974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'})  2026-03-23 00:43:48.345980 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'})  2026-03-23 00:43:48.345987 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:43:48.345994 | orchestrator | 2026-03-23 00:43:48.346000 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-23 00:43:48.346007 | orchestrator | Monday 23 March 2026 00:43:45 +0000 (0:00:00.160) 0:00:23.090 ********** 2026-03-23 00:43:48.346087 | orchestrator | ok: [testbed-node-3] => { 2026-03-23 
00:43:48.346097 | orchestrator |  "lvm_report": { 2026-03-23 00:43:48.346105 | orchestrator |  "lv": [ 2026-03-23 00:43:48.346112 | orchestrator |  { 2026-03-23 00:43:48.346120 | orchestrator |  "lv_name": "osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a", 2026-03-23 00:43:48.346128 | orchestrator |  "vg_name": "ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a" 2026-03-23 00:43:48.346134 | orchestrator |  }, 2026-03-23 00:43:48.346148 | orchestrator |  { 2026-03-23 00:43:48.346155 | orchestrator |  "lv_name": "osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40", 2026-03-23 00:43:48.346162 | orchestrator |  "vg_name": "ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40" 2026-03-23 00:43:48.346168 | orchestrator |  } 2026-03-23 00:43:48.346175 | orchestrator |  ], 2026-03-23 00:43:48.346182 | orchestrator |  "pv": [ 2026-03-23 00:43:48.346189 | orchestrator |  { 2026-03-23 00:43:48.346195 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-23 00:43:48.346202 | orchestrator |  "vg_name": "ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a" 2026-03-23 00:43:48.346209 | orchestrator |  }, 2026-03-23 00:43:48.346216 | orchestrator |  { 2026-03-23 00:43:48.346222 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-23 00:43:48.346229 | orchestrator |  "vg_name": "ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40" 2026-03-23 00:43:48.346236 | orchestrator |  } 2026-03-23 00:43:48.346243 | orchestrator |  ] 2026-03-23 00:43:48.346249 | orchestrator |  } 2026-03-23 00:43:48.346257 | orchestrator | } 2026-03-23 00:43:48.346264 | orchestrator | 2026-03-23 00:43:48.346271 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-23 00:43:48.346278 | orchestrator | 2026-03-23 00:43:48.346285 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-23 00:43:48.346292 | orchestrator | Monday 23 March 2026 00:43:45 +0000 (0:00:00.317) 0:00:23.407 ********** 2026-03-23 00:43:48.346299 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-03-23 00:43:48.346306 | orchestrator | 2026-03-23 00:43:48.346313 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-23 00:43:48.346320 | orchestrator | Monday 23 March 2026 00:43:45 +0000 (0:00:00.276) 0:00:23.684 ********** 2026-03-23 00:43:48.346328 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:43:48.346334 | orchestrator | 2026-03-23 00:43:48.346341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:48.346348 | orchestrator | Monday 23 March 2026 00:43:46 +0000 (0:00:00.263) 0:00:23.948 ********** 2026-03-23 00:43:48.346355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-23 00:43:48.346362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-23 00:43:48.346369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-23 00:43:48.346376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-23 00:43:48.346383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-23 00:43:48.346390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-23 00:43:48.346396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-23 00:43:48.346403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-23 00:43:48.346410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-23 00:43:48.346423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-23 00:43:48.346431 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-23 00:43:48.346438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-23 00:43:48.346444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-23 00:43:48.346451 | orchestrator | 2026-03-23 00:43:48.346458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:48.346464 | orchestrator | Monday 23 March 2026 00:43:46 +0000 (0:00:00.457) 0:00:24.405 ********** 2026-03-23 00:43:48.346471 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:48.346483 | orchestrator | 2026-03-23 00:43:48.346489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:48.346499 | orchestrator | Monday 23 March 2026 00:43:46 +0000 (0:00:00.193) 0:00:24.599 ********** 2026-03-23 00:43:48.346506 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:48.346513 | orchestrator | 2026-03-23 00:43:48.346520 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:48.346527 | orchestrator | Monday 23 March 2026 00:43:46 +0000 (0:00:00.280) 0:00:24.879 ********** 2026-03-23 00:43:48.346534 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:48.346540 | orchestrator | 2026-03-23 00:43:48.346547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:48.346554 | orchestrator | Monday 23 March 2026 00:43:47 +0000 (0:00:00.189) 0:00:25.068 ********** 2026-03-23 00:43:48.346560 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:48.346567 | orchestrator | 2026-03-23 00:43:48.346574 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:48.346581 | orchestrator | Monday 23 March 2026 00:43:47 +0000 
(0:00:00.706) 0:00:25.775 ********** 2026-03-23 00:43:48.346587 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:48.346594 | orchestrator | 2026-03-23 00:43:48.346601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:48.346607 | orchestrator | Monday 23 March 2026 00:43:48 +0000 (0:00:00.230) 0:00:26.006 ********** 2026-03-23 00:43:48.346614 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:48.346621 | orchestrator | 2026-03-23 00:43:48.346634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:58.124367 | orchestrator | Monday 23 March 2026 00:43:48 +0000 (0:00:00.218) 0:00:26.224 ********** 2026-03-23 00:43:58.124456 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.124469 | orchestrator | 2026-03-23 00:43:58.124478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:58.124487 | orchestrator | Monday 23 March 2026 00:43:48 +0000 (0:00:00.203) 0:00:26.427 ********** 2026-03-23 00:43:58.124495 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.124503 | orchestrator | 2026-03-23 00:43:58.124511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:58.124520 | orchestrator | Monday 23 March 2026 00:43:48 +0000 (0:00:00.205) 0:00:26.633 ********** 2026-03-23 00:43:58.124528 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff) 2026-03-23 00:43:58.124537 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff) 2026-03-23 00:43:58.124545 | orchestrator | 2026-03-23 00:43:58.124553 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:58.124561 | orchestrator | Monday 23 March 2026 00:43:49 +0000 
(0:00:00.407) 0:00:27.041 ********** 2026-03-23 00:43:58.124569 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e) 2026-03-23 00:43:58.124577 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e) 2026-03-23 00:43:58.124584 | orchestrator | 2026-03-23 00:43:58.124607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:58.124615 | orchestrator | Monday 23 March 2026 00:43:49 +0000 (0:00:00.419) 0:00:27.460 ********** 2026-03-23 00:43:58.124623 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6) 2026-03-23 00:43:58.124631 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6) 2026-03-23 00:43:58.124639 | orchestrator | 2026-03-23 00:43:58.124646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:58.124654 | orchestrator | Monday 23 March 2026 00:43:49 +0000 (0:00:00.415) 0:00:27.875 ********** 2026-03-23 00:43:58.124662 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5) 2026-03-23 00:43:58.124711 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5) 2026-03-23 00:43:58.124720 | orchestrator | 2026-03-23 00:43:58.124728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:43:58.124736 | orchestrator | Monday 23 March 2026 00:43:50 +0000 (0:00:00.437) 0:00:28.313 ********** 2026-03-23 00:43:58.124744 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-23 00:43:58.124752 | orchestrator | 2026-03-23 00:43:58.124759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 
00:43:58.124767 | orchestrator | Monday 23 March 2026 00:43:50 +0000 (0:00:00.323) 0:00:28.637 ********** 2026-03-23 00:43:58.124775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-23 00:43:58.124783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-23 00:43:58.124791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-23 00:43:58.124799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-23 00:43:58.124806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-23 00:43:58.124814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-23 00:43:58.124822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-23 00:43:58.124830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-23 00:43:58.124838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-23 00:43:58.124845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-23 00:43:58.124853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-23 00:43:58.124861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-23 00:43:58.124868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-23 00:43:58.124876 | orchestrator | 2026-03-23 00:43:58.124884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.124892 | 
orchestrator | Monday 23 March 2026 00:43:51 +0000 (0:00:00.583) 0:00:29.220 ********** 2026-03-23 00:43:58.124899 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.124907 | orchestrator | 2026-03-23 00:43:58.124915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.124923 | orchestrator | Monday 23 March 2026 00:43:51 +0000 (0:00:00.177) 0:00:29.398 ********** 2026-03-23 00:43:58.124930 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.124938 | orchestrator | 2026-03-23 00:43:58.124946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.124954 | orchestrator | Monday 23 March 2026 00:43:51 +0000 (0:00:00.183) 0:00:29.581 ********** 2026-03-23 00:43:58.124961 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.124969 | orchestrator | 2026-03-23 00:43:58.124993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.125002 | orchestrator | Monday 23 March 2026 00:43:51 +0000 (0:00:00.190) 0:00:29.771 ********** 2026-03-23 00:43:58.125010 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.125018 | orchestrator | 2026-03-23 00:43:58.125025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.125033 | orchestrator | Monday 23 March 2026 00:43:52 +0000 (0:00:00.183) 0:00:29.955 ********** 2026-03-23 00:43:58.125041 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.125049 | orchestrator | 2026-03-23 00:43:58.125057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.125071 | orchestrator | Monday 23 March 2026 00:43:52 +0000 (0:00:00.192) 0:00:30.147 ********** 2026-03-23 00:43:58.125079 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.125087 | orchestrator | 2026-03-23 
00:43:58.125095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.125103 | orchestrator | Monday 23 March 2026 00:43:52 +0000 (0:00:00.171) 0:00:30.319 ********** 2026-03-23 00:43:58.125111 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.125119 | orchestrator | 2026-03-23 00:43:58.125127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.125134 | orchestrator | Monday 23 March 2026 00:43:52 +0000 (0:00:00.183) 0:00:30.503 ********** 2026-03-23 00:43:58.125142 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.125150 | orchestrator | 2026-03-23 00:43:58.125158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.125170 | orchestrator | Monday 23 March 2026 00:43:52 +0000 (0:00:00.162) 0:00:30.666 ********** 2026-03-23 00:43:58.125178 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-23 00:43:58.125186 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-23 00:43:58.125194 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-23 00:43:58.125202 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-23 00:43:58.125210 | orchestrator | 2026-03-23 00:43:58.125218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.125226 | orchestrator | Monday 23 March 2026 00:43:53 +0000 (0:00:00.694) 0:00:31.360 ********** 2026-03-23 00:43:58.125233 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:43:58.125241 | orchestrator | 2026-03-23 00:43:58.125249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:43:58.125257 | orchestrator | Monday 23 March 2026 00:43:53 +0000 (0:00:00.160) 0:00:31.521 ********** 2026-03-23 00:43:58.125264 | orchestrator | skipping: [testbed-node-4] 2026-03-23 
00:43:58.125272 | orchestrator | 
2026-03-23 00:43:58.125280 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:58.125288 | orchestrator | Monday 23 March 2026  00:43:53 +0000 (0:00:00.173)       0:00:31.694 **********
2026-03-23 00:43:58.125296 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:43:58.125303 | orchestrator | 
2026-03-23 00:43:58.125311 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-23 00:43:58.125321 | orchestrator | Monday 23 March 2026  00:43:54 +0000 (0:00:00.495)       0:00:32.189 **********
2026-03-23 00:43:58.125334 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:43:58.125347 | orchestrator | 
2026-03-23 00:43:58.125360 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-23 00:43:58.125380 | orchestrator | Monday 23 March 2026  00:43:54 +0000 (0:00:00.182)       0:00:32.372 **********
2026-03-23 00:43:58.125394 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:43:58.125407 | orchestrator | 
2026-03-23 00:43:58.125419 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-23 00:43:58.125433 | orchestrator | Monday 23 March 2026  00:43:54 +0000 (0:00:00.153)       0:00:32.526 **********
2026-03-23 00:43:58.125444 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1bf36823-02d4-5086-a00f-5e3efdd328af'}})
2026-03-23 00:43:58.125452 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '92a7bb1e-121d-56dc-8fa7-94c9c65422a6'}})
2026-03-23 00:43:58.125460 | orchestrator | 
2026-03-23 00:43:58.125467 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-23 00:43:58.125475 | orchestrator | Monday 23 March 2026  00:43:54 +0000 (0:00:00.180)       0:00:32.706 **********
2026-03-23 00:43:58.125484 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:43:58.125493 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:43:58.125510 | orchestrator | 
2026-03-23 00:43:58.125518 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-23 00:43:58.125526 | orchestrator | Monday 23 March 2026  00:43:56 +0000 (0:00:01.867)       0:00:34.574 **********
2026-03-23 00:43:58.125533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:43:58.125543 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:43:58.125551 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:43:58.125559 | orchestrator | 
2026-03-23 00:43:58.125567 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-23 00:43:58.125575 | orchestrator | Monday 23 March 2026  00:43:56 +0000 (0:00:00.139)       0:00:34.713 **********
2026-03-23 00:43:58.125582 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:43:58.125598 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:03.198432 | orchestrator | 
2026-03-23 00:44:03.198564 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-23 00:44:03.198589 | orchestrator | Monday 23 March 2026  00:43:58 +0000 (0:00:01.396)       0:00:36.110 **********
2026-03-23 00:44:03.198609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:03.198631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:03.198650 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.198671 | orchestrator | 
2026-03-23 00:44:03.198718 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-23 00:44:03.198737 | orchestrator | Monday 23 March 2026  00:43:58 +0000 (0:00:00.141)       0:00:36.252 **********
2026-03-23 00:44:03.198756 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.198774 | orchestrator | 
2026-03-23 00:44:03.198793 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-23 00:44:03.198811 | orchestrator | Monday 23 March 2026  00:43:58 +0000 (0:00:00.119)       0:00:36.371 **********
2026-03-23 00:44:03.198831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:03.198850 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:03.198868 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.198885 | orchestrator | 
2026-03-23 00:44:03.198903 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-23 00:44:03.198922 | orchestrator | Monday 23 March 2026  00:43:58 +0000 (0:00:00.138)       0:00:36.509 **********
2026-03-23 00:44:03.198941 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.198961 | orchestrator | 
2026-03-23 00:44:03.198980 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-23 00:44:03.198999 | orchestrator | Monday 23 March 2026  00:43:58 +0000 (0:00:00.125)       0:00:36.635 **********
2026-03-23 00:44:03.199018 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:03.199037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:03.199085 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.199104 | orchestrator | 
2026-03-23 00:44:03.199124 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-23 00:44:03.199143 | orchestrator | Monday 23 March 2026  00:43:58 +0000 (0:00:00.148)       0:00:36.783 **********
2026-03-23 00:44:03.199161 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.199181 | orchestrator | 
2026-03-23 00:44:03.199219 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-23 00:44:03.199238 | orchestrator | Monday 23 March 2026  00:43:59 +0000 (0:00:00.264)       0:00:37.047 **********
2026-03-23 00:44:03.199258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:03.199277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:03.199296 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.199314 | orchestrator | 
2026-03-23 00:44:03.199331 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-23 00:44:03.199349 | orchestrator | Monday 23 March 2026  00:43:59 +0000 (0:00:00.143)       0:00:37.190 **********
2026-03-23 00:44:03.199366 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:44:03.199385 | orchestrator | 
2026-03-23 00:44:03.199403 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-23 00:44:03.199421 | orchestrator | Monday 23 March 2026  00:43:59 +0000 (0:00:00.116)       0:00:37.307 **********
2026-03-23 00:44:03.199438 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:03.199456 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:03.199474 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.199491 | orchestrator | 
2026-03-23 00:44:03.199508 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-23 00:44:03.199526 | orchestrator | Monday 23 March 2026  00:43:59 +0000 (0:00:00.145)       0:00:37.452 **********
2026-03-23 00:44:03.199543 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:03.199561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:03.199579 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.199596 | orchestrator | 
2026-03-23 00:44:03.199614 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-23 00:44:03.199656 | orchestrator | Monday 23 March 2026  00:43:59 +0000 (0:00:00.143)       0:00:37.596 **********
2026-03-23 00:44:03.199675 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:03.199718 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:03.199736 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.199753 | orchestrator | 
2026-03-23 00:44:03.199771 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-23 00:44:03.199789 | orchestrator | Monday 23 March 2026  00:43:59 +0000 (0:00:00.142)       0:00:37.739 **********
2026-03-23 00:44:03.199806 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.199824 | orchestrator | 
2026-03-23 00:44:03.199842 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-23 00:44:03.199860 | orchestrator | Monday 23 March 2026  00:43:59 +0000 (0:00:00.129)       0:00:37.869 **********
2026-03-23 00:44:03.199889 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.199907 | orchestrator | 
2026-03-23 00:44:03.199925 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-23 00:44:03.199949 | orchestrator | Monday 23 March 2026  00:44:00 +0000 (0:00:00.116)       0:00:37.986 **********
2026-03-23 00:44:03.199967 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.199984 | orchestrator | 
2026-03-23 00:44:03.200001 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-23 00:44:03.200019 | orchestrator | Monday 23 March 2026  00:44:00 +0000 (0:00:00.106)       0:00:38.092 **********
2026-03-23 00:44:03.200037 | orchestrator | ok: [testbed-node-4] => {
2026-03-23 00:44:03.200054 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-23 00:44:03.200072 | orchestrator | }
2026-03-23 00:44:03.200090 | orchestrator | 
2026-03-23 00:44:03.200107 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-23 00:44:03.200126 | orchestrator | Monday 23 March 2026  00:44:00 +0000 (0:00:00.115)       0:00:38.208 **********
2026-03-23 00:44:03.200143 | orchestrator | ok: [testbed-node-4] => {
2026-03-23 00:44:03.200161 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-23 00:44:03.200180 | orchestrator | }
2026-03-23 00:44:03.200202 | orchestrator | 
2026-03-23 00:44:03.200220 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-23 00:44:03.200238 | orchestrator | Monday 23 March 2026  00:44:00 +0000 (0:00:00.116)       0:00:38.324 **********
2026-03-23 00:44:03.200256 | orchestrator | ok: [testbed-node-4] => {
2026-03-23 00:44:03.200274 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-23 00:44:03.200292 | orchestrator | }
2026-03-23 00:44:03.200310 | orchestrator | 
2026-03-23 00:44:03.200328 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-23 00:44:03.200346 | orchestrator | Monday 23 March 2026  00:44:00 +0000 (0:00:00.129)       0:00:38.453 **********
2026-03-23 00:44:03.200364 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:44:03.200381 | orchestrator | 
2026-03-23 00:44:03.200399 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-23 00:44:03.200416 | orchestrator | Monday 23 March 2026  00:44:01 +0000 (0:00:00.638)       0:00:39.092 **********
2026-03-23 00:44:03.200434 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:44:03.200451 | orchestrator | 
2026-03-23 00:44:03.200469 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-23 00:44:03.200487 | orchestrator | Monday 23 March 2026  00:44:01 +0000 (0:00:00.519)       0:00:39.611 **********
2026-03-23 00:44:03.200504 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:44:03.200522 | orchestrator | 
2026-03-23 00:44:03.200539 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-23 00:44:03.200556 | orchestrator | Monday 23 March 2026  00:44:02 +0000 (0:00:00.510)       0:00:40.121 **********
2026-03-23 00:44:03.200574 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:44:03.200591 | orchestrator | 
2026-03-23 00:44:03.200608 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-23 00:44:03.200626 | orchestrator | Monday 23 March 2026  00:44:02 +0000 (0:00:00.133)       0:00:40.255 **********
2026-03-23 00:44:03.200643 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.200661 | orchestrator | 
2026-03-23 00:44:03.200753 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-23 00:44:03.200773 | orchestrator | Monday 23 March 2026  00:44:02 +0000 (0:00:00.098)       0:00:40.353 **********
2026-03-23 00:44:03.200790 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.200808 | orchestrator | 
2026-03-23 00:44:03.200826 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-23 00:44:03.200844 | orchestrator | Monday 23 March 2026  00:44:02 +0000 (0:00:00.105)       0:00:40.458 **********
2026-03-23 00:44:03.200861 | orchestrator | ok: [testbed-node-4] => {
2026-03-23 00:44:03.200879 | orchestrator |     "vgs_report": {
2026-03-23 00:44:03.200897 | orchestrator |         "vg": []
2026-03-23 00:44:03.200914 | orchestrator |     }
2026-03-23 00:44:03.200932 | orchestrator | }
2026-03-23 00:44:03.200961 | orchestrator | 
2026-03-23 00:44:03.200979 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-23 00:44:03.200996 | orchestrator | Monday 23 March 2026  00:44:02 +0000 (0:00:00.118)       0:00:40.576 **********
2026-03-23 00:44:03.201014 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.201032 | orchestrator | 
2026-03-23 00:44:03.201049 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-23 00:44:03.201067 | orchestrator | Monday 23 March 2026  00:44:02 +0000 (0:00:00.122)       0:00:40.699 **********
2026-03-23 00:44:03.201084 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.201102 | orchestrator | 
2026-03-23 00:44:03.201119 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-23 00:44:03.201137 | orchestrator | Monday 23 March 2026  00:44:02 +0000 (0:00:00.122)       0:00:40.822 **********
2026-03-23 00:44:03.201154 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.201172 | orchestrator | 
2026-03-23 00:44:03.201189 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-23 00:44:03.201207 | orchestrator | Monday 23 March 2026  00:44:03 +0000 (0:00:00.130)       0:00:40.952 **********
2026-03-23 00:44:03.201225 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:03.201242 | orchestrator | 
2026-03-23 00:44:03.201272 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-23 00:44:07.466426 | orchestrator | Monday 23 March 2026  00:44:03 +0000 (0:00:00.124)       0:00:41.077 **********
2026-03-23 00:44:07.466538 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466547 | orchestrator | 
2026-03-23 00:44:07.466553 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-23 00:44:07.466557 | orchestrator | Monday 23 March 2026  00:44:03 +0000 (0:00:00.139)       0:00:41.217 **********
2026-03-23 00:44:07.466561 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466566 | orchestrator | 
2026-03-23 00:44:07.466570 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-23 00:44:07.466574 | orchestrator | Monday 23 March 2026  00:44:03 +0000 (0:00:00.261)       0:00:41.478 **********
2026-03-23 00:44:07.466578 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466582 | orchestrator | 
2026-03-23 00:44:07.466586 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-23 00:44:07.466590 | orchestrator | Monday 23 March 2026  00:44:03 +0000 (0:00:00.128)       0:00:41.607 **********
2026-03-23 00:44:07.466594 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466598 | orchestrator | 
2026-03-23 00:44:07.466602 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-23 00:44:07.466605 | orchestrator | Monday 23 March 2026  00:44:03 +0000 (0:00:00.122)       0:00:41.730 **********
2026-03-23 00:44:07.466626 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466630 | orchestrator | 
2026-03-23 00:44:07.466633 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-23 00:44:07.466637 | orchestrator | Monday 23 March 2026  00:44:03 +0000 (0:00:00.127)       0:00:41.857 **********
2026-03-23 00:44:07.466641 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466645 | orchestrator | 
2026-03-23 00:44:07.466649 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-23 00:44:07.466653 | orchestrator | Monday 23 March 2026  00:44:04 +0000 (0:00:00.126)       0:00:41.983 **********
2026-03-23 00:44:07.466656 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466660 | orchestrator | 
2026-03-23 00:44:07.466664 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-23 00:44:07.466668 | orchestrator | Monday 23 March 2026  00:44:04 +0000 (0:00:00.121)       0:00:42.105 **********
2026-03-23 00:44:07.466672 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466676 | orchestrator | 
2026-03-23 00:44:07.466733 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-23 00:44:07.466737 | orchestrator | Monday 23 March 2026  00:44:04 +0000 (0:00:00.139)       0:00:42.244 **********
2026-03-23 00:44:07.466741 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466763 | orchestrator | 
2026-03-23 00:44:07.466767 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-23 00:44:07.466771 | orchestrator | Monday 23 March 2026  00:44:04 +0000 (0:00:00.128)       0:00:42.372 **********
2026-03-23 00:44:07.466775 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466779 | orchestrator | 
2026-03-23 00:44:07.466782 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-23 00:44:07.466786 | orchestrator | Monday 23 March 2026  00:44:04 +0000 (0:00:00.128)       0:00:42.501 **********
2026-03-23 00:44:07.466793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.466802 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.466808 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466814 | orchestrator | 
2026-03-23 00:44:07.466820 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-23 00:44:07.466827 | orchestrator | Monday 23 March 2026  00:44:04 +0000 (0:00:00.151)       0:00:42.652 **********
2026-03-23 00:44:07.466833 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.466839 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.466846 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466853 | orchestrator | 
2026-03-23 00:44:07.466857 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-23 00:44:07.466861 | orchestrator | Monday 23 March 2026  00:44:04 +0000 (0:00:00.132)       0:00:42.785 **********
2026-03-23 00:44:07.466864 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.466868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.466872 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466876 | orchestrator | 
2026-03-23 00:44:07.466879 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-23 00:44:07.466883 | orchestrator | Monday 23 March 2026  00:44:05 +0000 (0:00:00.144)       0:00:42.930 **********
2026-03-23 00:44:07.466887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.466891 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.466895 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466899 | orchestrator | 
2026-03-23 00:44:07.466918 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-23 00:44:07.466923 | orchestrator | Monday 23 March 2026  00:44:05 +0000 (0:00:00.304)       0:00:43.234 **********
2026-03-23 00:44:07.466927 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.466931 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.466935 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466938 | orchestrator | 
2026-03-23 00:44:07.466942 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-23 00:44:07.466946 | orchestrator | Monday 23 March 2026  00:44:05 +0000 (0:00:00.141)       0:00:43.376 **********
2026-03-23 00:44:07.466957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.466962 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.466967 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466971 | orchestrator | 
2026-03-23 00:44:07.466976 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-23 00:44:07.466981 | orchestrator | Monday 23 March 2026  00:44:05 +0000 (0:00:00.137)       0:00:43.514 **********
2026-03-23 00:44:07.466985 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.466990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.466994 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.466999 | orchestrator | 
2026-03-23 00:44:07.467003 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-23 00:44:07.467008 | orchestrator | Monday 23 March 2026  00:44:05 +0000 (0:00:00.135)       0:00:43.650 **********
2026-03-23 00:44:07.467012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.467017 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.467021 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.467025 | orchestrator | 
2026-03-23 00:44:07.467030 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-23 00:44:07.467034 | orchestrator | Monday 23 March 2026  00:44:05 +0000 (0:00:00.139)       0:00:43.789 **********
2026-03-23 00:44:07.467038 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:44:07.467043 | orchestrator | 
2026-03-23 00:44:07.467048 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-23 00:44:07.467052 | orchestrator | Monday 23 March 2026  00:44:06 +0000 (0:00:00.523)       0:00:44.313 **********
2026-03-23 00:44:07.467056 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:44:07.467061 | orchestrator | 
2026-03-23 00:44:07.467065 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-23 00:44:07.467069 | orchestrator | Monday 23 March 2026  00:44:06 +0000 (0:00:00.514)       0:00:44.827 **********
2026-03-23 00:44:07.467074 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:44:07.467078 | orchestrator | 
2026-03-23 00:44:07.467083 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-23 00:44:07.467087 | orchestrator | Monday 23 March 2026  00:44:07 +0000 (0:00:00.145)       0:00:44.973 **********
2026-03-23 00:44:07.467092 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'vg_name': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.467098 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'vg_name': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.467103 | orchestrator | 
2026-03-23 00:44:07.467107 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-23 00:44:07.467111 | orchestrator | Monday 23 March 2026  00:44:07 +0000 (0:00:00.161)       0:00:45.135 **********
2026-03-23 00:44:07.467116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.467156 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:07.467161 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:07.467169 | orchestrator | 
2026-03-23 00:44:07.467174 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-23 00:44:07.467178 | orchestrator | Monday 23 March 2026  00:44:07 +0000 (0:00:00.149)       0:00:45.285 **********
2026-03-23 00:44:07.467183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:07.467191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:12.725289 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:12.725401 | orchestrator | 
2026-03-23 00:44:12.725418 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-23 00:44:12.725431 | orchestrator | Monday 23 March 2026  00:44:07 +0000 (0:00:00.138)       0:00:45.423 **********
2026-03-23 00:44:12.725443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'})
2026-03-23 00:44:12.725457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'})
2026-03-23 00:44:12.725468 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:44:12.725479 | orchestrator | 
2026-03-23 00:44:12.725490 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-23 00:44:12.725502 | orchestrator | Monday 23 March 2026  00:44:07 +0000 (0:00:00.150)       0:00:45.574 **********
2026-03-23 00:44:12.725513 | orchestrator | ok: [testbed-node-4] => {
2026-03-23 00:44:12.725524 | orchestrator |     "lvm_report": {
2026-03-23 00:44:12.725535 | orchestrator |         "lv": [
2026-03-23 00:44:12.725562 | orchestrator |             {
2026-03-23 00:44:12.725574 | orchestrator |                 "lv_name": "osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af",
2026-03-23 00:44:12.725586 | orchestrator |                 "vg_name": "ceph-1bf36823-02d4-5086-a00f-5e3efdd328af"
2026-03-23 00:44:12.725597 | orchestrator |             },
2026-03-23 00:44:12.725608 | orchestrator |             {
2026-03-23 00:44:12.725619 | orchestrator |                 "lv_name": "osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6",
2026-03-23 00:44:12.725630 | orchestrator |                 "vg_name": "ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6"
2026-03-23 00:44:12.725641 | orchestrator |             }
2026-03-23 00:44:12.725652 | orchestrator |         ],
2026-03-23 00:44:12.725663 | orchestrator |         "pv": [
2026-03-23 00:44:12.725674 | orchestrator |             {
2026-03-23 00:44:12.725734 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-23 00:44:12.725745 | orchestrator |                 "vg_name": "ceph-1bf36823-02d4-5086-a00f-5e3efdd328af"
2026-03-23 00:44:12.725756 | orchestrator |             },
2026-03-23 00:44:12.725767 | orchestrator |             {
2026-03-23 00:44:12.725778 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-23 00:44:12.725789 | orchestrator |                 "vg_name": "ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6"
2026-03-23 00:44:12.725804 | orchestrator |             }
2026-03-23 00:44:12.725816 | orchestrator |         ]
2026-03-23 00:44:12.725828 | orchestrator |     }
2026-03-23 00:44:12.725841 | orchestrator | }
2026-03-23 00:44:12.725853 | orchestrator | 
2026-03-23 00:44:12.725865 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-23 00:44:12.725877 | orchestrator | 
2026-03-23 00:44:12.725889 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-23 00:44:12.725901 | orchestrator | Monday 23 March 2026  00:44:08 +0000 (0:00:00.393)       0:00:45.968 **********
2026-03-23 00:44:12.725914 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-23 00:44:12.725926 | orchestrator | 
2026-03-23 00:44:12.725939 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-23 00:44:12.725951 | orchestrator | Monday 23 March 2026  00:44:08 +0000 (0:00:00.221)       0:00:46.190 **********
2026-03-23 00:44:12.725984 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:44:12.725997 | orchestrator | 
2026-03-23 00:44:12.726009 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726146 | orchestrator | Monday 23 March 2026  00:44:08 +0000 (0:00:00.189)       0:00:46.379 **********
2026-03-23 00:44:12.726160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-23 00:44:12.726171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-23 00:44:12.726182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-23 00:44:12.726196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-23 00:44:12.726207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-23 00:44:12.726217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-23 00:44:12.726228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-23 00:44:12.726239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-23 00:44:12.726249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-23 00:44:12.726260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-23 00:44:12.726271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-23 00:44:12.726281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-23 00:44:12.726292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-23 00:44:12.726303 | orchestrator | 
2026-03-23 00:44:12.726313 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726324 | orchestrator | Monday 23 March 2026  00:44:08 +0000 (0:00:00.355)       0:00:46.735 **********
2026-03-23 00:44:12.726335 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:44:12.726346 | orchestrator | 
2026-03-23 00:44:12.726356 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726367 | orchestrator | Monday 23 March 2026  00:44:09 +0000 (0:00:00.173)       0:00:46.908 **********
2026-03-23 00:44:12.726378 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:44:12.726389 | orchestrator | 
2026-03-23 00:44:12.726399 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726430 | orchestrator | Monday 23 March 2026  00:44:09 +0000 (0:00:00.168)       0:00:47.077 **********
2026-03-23 00:44:12.726442 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:44:12.726452 | orchestrator | 
2026-03-23 00:44:12.726463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726474 | orchestrator | Monday 23 March 2026  00:44:09 +0000 (0:00:00.157)       0:00:47.234 **********
2026-03-23 00:44:12.726485 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:44:12.726496 | orchestrator | 
2026-03-23 00:44:12.726506 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726517 | orchestrator | Monday 23 March 2026  00:44:09 +0000 (0:00:00.242)       0:00:47.477 **********
2026-03-23 00:44:12.726528 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:44:12.726539 | orchestrator | 
2026-03-23 00:44:12.726549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726560 | orchestrator | Monday 23 March 2026  00:44:09 +0000 (0:00:00.176)       0:00:47.654 **********
2026-03-23 00:44:12.726571 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:44:12.726582 | orchestrator | 
2026-03-23 00:44:12.726592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726612 | orchestrator | Monday 23 March 2026  00:44:10 +0000 (0:00:00.466)       0:00:48.121 **********
2026-03-23 00:44:12.726623 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:44:12.726652 | orchestrator | 
2026-03-23 00:44:12.726663 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726674 | orchestrator | Monday 23 March 2026  00:44:10 +0000 (0:00:00.186)       0:00:48.307 **********
2026-03-23 00:44:12.726708 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:44:12.726719 | orchestrator | 
2026-03-23 00:44:12.726729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726740 | orchestrator | Monday 23 March 2026  00:44:10 +0000 (0:00:00.184)       0:00:48.492 **********
2026-03-23 00:44:12.726751 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37)
2026-03-23 00:44:12.726763 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37)
2026-03-23 00:44:12.726773 | orchestrator | 
2026-03-23 00:44:12.726784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726794 | orchestrator | Monday 23 March 2026  00:44:10 +0000 (0:00:00.371)       0:00:48.863 **********
2026-03-23 00:44:12.726805 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d)
2026-03-23 00:44:12.726816 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d)
2026-03-23 00:44:12.726827 | orchestrator | 
2026-03-23 00:44:12.726838 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-23 00:44:12.726848 | orchestrator | Monday 23 March 2026  00:44:11 +0000 (0:00:00.391)       0:00:49.254 **********
2026-03-23 00:44:12.726859 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76)
2026-03-23 00:44:12.726870 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76)
2026-03-23 00:44:12.726881 | orchestrator | 
2026-03-23 00:44:12.726891 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:44:12.726902 | orchestrator | Monday 23 March 2026 00:44:11 +0000 (0:00:00.388) 0:00:49.642 ********** 2026-03-23 00:44:12.726913 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d) 2026-03-23 00:44:12.726924 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d) 2026-03-23 00:44:12.726934 | orchestrator | 2026-03-23 00:44:12.726945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-23 00:44:12.726956 | orchestrator | Monday 23 March 2026 00:44:12 +0000 (0:00:00.385) 0:00:50.028 ********** 2026-03-23 00:44:12.726967 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-23 00:44:12.726977 | orchestrator | 2026-03-23 00:44:12.726988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:12.726999 | orchestrator | Monday 23 March 2026 00:44:12 +0000 (0:00:00.294) 0:00:50.323 ********** 2026-03-23 00:44:12.727009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-23 00:44:12.727020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-23 00:44:12.727031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-23 00:44:12.727041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-23 00:44:12.727052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-23 00:44:12.727063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-23 00:44:12.727073 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-23 00:44:12.727084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-23 00:44:12.727094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-23 00:44:12.727113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-23 00:44:12.727124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-23 00:44:12.727141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-23 00:44:20.885387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-23 00:44:20.885523 | orchestrator | 2026-03-23 00:44:20.885550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.885571 | orchestrator | Monday 23 March 2026 00:44:12 +0000 (0:00:00.357) 0:00:50.680 ********** 2026-03-23 00:44:20.885592 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.885613 | orchestrator | 2026-03-23 00:44:20.885633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.885653 | orchestrator | Monday 23 March 2026 00:44:12 +0000 (0:00:00.174) 0:00:50.855 ********** 2026-03-23 00:44:20.885673 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.885724 | orchestrator | 2026-03-23 00:44:20.885744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.885764 | orchestrator | Monday 23 March 2026 00:44:13 +0000 (0:00:00.184) 0:00:51.039 ********** 2026-03-23 00:44:20.885784 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.885804 | orchestrator | 2026-03-23 00:44:20.885823 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.885861 | orchestrator | Monday 23 March 2026 00:44:13 +0000 (0:00:00.477) 0:00:51.516 ********** 2026-03-23 00:44:20.885884 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.885906 | orchestrator | 2026-03-23 00:44:20.885927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.885947 | orchestrator | Monday 23 March 2026 00:44:13 +0000 (0:00:00.185) 0:00:51.702 ********** 2026-03-23 00:44:20.885967 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.885985 | orchestrator | 2026-03-23 00:44:20.886003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.886176 | orchestrator | Monday 23 March 2026 00:44:13 +0000 (0:00:00.177) 0:00:51.879 ********** 2026-03-23 00:44:20.886198 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.886210 | orchestrator | 2026-03-23 00:44:20.886221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.886232 | orchestrator | Monday 23 March 2026 00:44:14 +0000 (0:00:00.179) 0:00:52.059 ********** 2026-03-23 00:44:20.886243 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.886254 | orchestrator | 2026-03-23 00:44:20.886265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.886276 | orchestrator | Monday 23 March 2026 00:44:14 +0000 (0:00:00.187) 0:00:52.246 ********** 2026-03-23 00:44:20.886287 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.886298 | orchestrator | 2026-03-23 00:44:20.886308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.886320 | orchestrator | Monday 23 March 2026 00:44:14 +0000 (0:00:00.189) 0:00:52.436 ********** 
2026-03-23 00:44:20.886331 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-23 00:44:20.886342 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-23 00:44:20.886354 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-23 00:44:20.886365 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-23 00:44:20.886376 | orchestrator | 2026-03-23 00:44:20.886386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.886397 | orchestrator | Monday 23 March 2026 00:44:15 +0000 (0:00:00.594) 0:00:53.030 ********** 2026-03-23 00:44:20.886408 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.886419 | orchestrator | 2026-03-23 00:44:20.886430 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.886466 | orchestrator | Monday 23 March 2026 00:44:15 +0000 (0:00:00.183) 0:00:53.214 ********** 2026-03-23 00:44:20.886485 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.886504 | orchestrator | 2026-03-23 00:44:20.886522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.886533 | orchestrator | Monday 23 March 2026 00:44:15 +0000 (0:00:00.191) 0:00:53.406 ********** 2026-03-23 00:44:20.886544 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.886554 | orchestrator | 2026-03-23 00:44:20.886565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-23 00:44:20.886576 | orchestrator | Monday 23 March 2026 00:44:15 +0000 (0:00:00.188) 0:00:53.595 ********** 2026-03-23 00:44:20.886586 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.886597 | orchestrator | 2026-03-23 00:44:20.886607 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-23 00:44:20.886618 | orchestrator | Monday 23 March 2026 00:44:15 +0000 
(0:00:00.201) 0:00:53.797 ********** 2026-03-23 00:44:20.886629 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.886639 | orchestrator | 2026-03-23 00:44:20.886650 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-23 00:44:20.886661 | orchestrator | Monday 23 March 2026 00:44:16 +0000 (0:00:00.250) 0:00:54.047 ********** 2026-03-23 00:44:20.886671 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b7e7e409-387b-5e35-af60-96efea6ce8aa'}}) 2026-03-23 00:44:20.886759 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa6fe99-be0d-55bf-a5b2-66c7db596be7'}}) 2026-03-23 00:44:20.886780 | orchestrator | 2026-03-23 00:44:20.886797 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-23 00:44:20.886809 | orchestrator | Monday 23 March 2026 00:44:16 +0000 (0:00:00.183) 0:00:54.231 ********** 2026-03-23 00:44:20.886821 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'}) 2026-03-23 00:44:20.886833 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'}) 2026-03-23 00:44:20.886844 | orchestrator | 2026-03-23 00:44:20.886854 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-23 00:44:20.886886 | orchestrator | Monday 23 March 2026 00:44:18 +0000 (0:00:01.955) 0:00:56.186 ********** 2026-03-23 00:44:20.886896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:20.886907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:20.886917 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.886926 | orchestrator | 2026-03-23 00:44:20.886936 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-23 00:44:20.886945 | orchestrator | Monday 23 March 2026 00:44:18 +0000 (0:00:00.152) 0:00:56.338 ********** 2026-03-23 00:44:20.886955 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'}) 2026-03-23 00:44:20.886964 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'}) 2026-03-23 00:44:20.886974 | orchestrator | 2026-03-23 00:44:20.886984 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-23 00:44:20.886993 | orchestrator | Monday 23 March 2026 00:44:19 +0000 (0:00:01.317) 0:00:57.655 ********** 2026-03-23 00:44:20.887002 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:20.887024 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:20.887034 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.887044 | orchestrator | 2026-03-23 00:44:20.887053 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-23 00:44:20.887063 | orchestrator | Monday 23 March 2026 00:44:19 +0000 (0:00:00.128) 0:00:57.784 ********** 2026-03-23 00:44:20.887073 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.887089 | 
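The "Create block VGs" and "Create block LVs" tasks above pair each OSD data device from `ceph_osd_devices` with a volume group named `ceph-<osd_lvm_uuid>` and a logical volume named `osd-block-<osd_lvm_uuid>`. A minimal sketch of that naming scheme, using the UUIDs visible in the log; the `lvm_names` helper is hypothetical, not the playbook's actual code:

```python
# Hypothetical sketch of the VG/LV naming applied by the "Create block VGs" /
# "Create block LVs" tasks: each ceph_osd_devices entry carries an
# osd_lvm_uuid, from which a VG "ceph-<uuid>" and an LV "osd-block-<uuid>"
# are derived (values taken from the log above).
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "b7e7e409-387b-5e35-af60-96efea6ce8aa"},
    "sdc": {"osd_lvm_uuid": "6fa6fe99-be0d-55bf-a5b2-66c7db596be7"},
}

def lvm_names(devices):
    """Yield (device, vg_name, lv_name) for each OSD data device."""
    for dev, meta in devices.items():
        uuid = meta["osd_lvm_uuid"]
        yield dev, f"ceph-{uuid}", f"osd-block-{uuid}"

for dev, vg, lv in lvm_names(ceph_osd_devices):
    # The real tasks would then run roughly the equivalent of:
    #   vgcreate <vg> /dev/<dev>  &&  lvcreate -n <lv> -l 100%FREE <vg>
    print(dev, vg, lv)
```

This matches the `data`/`data_vg` item pairs shown in the task output above.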
orchestrator | 2026-03-23 00:44:20.887107 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-23 00:44:20.887122 | orchestrator | Monday 23 March 2026 00:44:20 +0000 (0:00:00.127) 0:00:57.912 ********** 2026-03-23 00:44:20.887140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:20.887158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:20.887176 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.887194 | orchestrator | 2026-03-23 00:44:20.887211 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-23 00:44:20.887229 | orchestrator | Monday 23 March 2026 00:44:20 +0000 (0:00:00.133) 0:00:58.045 ********** 2026-03-23 00:44:20.887247 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.887263 | orchestrator | 2026-03-23 00:44:20.887280 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-23 00:44:20.887310 | orchestrator | Monday 23 March 2026 00:44:20 +0000 (0:00:00.124) 0:00:58.169 ********** 2026-03-23 00:44:20.887328 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:20.887346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:20.887362 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.887377 | orchestrator | 2026-03-23 00:44:20.887393 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-03-23 00:44:20.887409 | orchestrator | Monday 23 March 2026 00:44:20 +0000 (0:00:00.144) 0:00:58.314 ********** 2026-03-23 00:44:20.887424 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.887439 | orchestrator | 2026-03-23 00:44:20.887453 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-23 00:44:20.887468 | orchestrator | Monday 23 March 2026 00:44:20 +0000 (0:00:00.136) 0:00:58.451 ********** 2026-03-23 00:44:20.887483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:20.887498 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:20.887512 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:20.887528 | orchestrator | 2026-03-23 00:44:20.887544 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-23 00:44:20.887560 | orchestrator | Monday 23 March 2026 00:44:20 +0000 (0:00:00.137) 0:00:58.588 ********** 2026-03-23 00:44:20.887578 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:20.887594 | orchestrator | 2026-03-23 00:44:20.887610 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-23 00:44:20.887626 | orchestrator | Monday 23 March 2026 00:44:20 +0000 (0:00:00.119) 0:00:58.708 ********** 2026-03-23 00:44:20.887656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:26.926231 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:26.926307 | 
orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926314 | orchestrator | 2026-03-23 00:44:26.926320 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-23 00:44:26.926326 | orchestrator | Monday 23 March 2026 00:44:21 +0000 (0:00:00.308) 0:00:59.017 ********** 2026-03-23 00:44:26.926331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:26.926336 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:26.926341 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926345 | orchestrator | 2026-03-23 00:44:26.926362 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-23 00:44:26.926367 | orchestrator | Monday 23 March 2026 00:44:21 +0000 (0:00:00.175) 0:00:59.193 ********** 2026-03-23 00:44:26.926371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:26.926376 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:26.926380 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926385 | orchestrator | 2026-03-23 00:44:26.926390 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-23 00:44:26.926394 | orchestrator | Monday 23 March 2026 00:44:21 +0000 (0:00:00.146) 0:00:59.340 ********** 2026-03-23 00:44:26.926399 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926403 | orchestrator | 2026-03-23 00:44:26.926408 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-23 00:44:26.926413 | orchestrator | Monday 23 March 2026 00:44:21 +0000 (0:00:00.122) 0:00:59.462 ********** 2026-03-23 00:44:26.926417 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926422 | orchestrator | 2026-03-23 00:44:26.926426 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-23 00:44:26.926431 | orchestrator | Monday 23 March 2026 00:44:21 +0000 (0:00:00.127) 0:00:59.590 ********** 2026-03-23 00:44:26.926436 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926441 | orchestrator | 2026-03-23 00:44:26.926445 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-23 00:44:26.926450 | orchestrator | Monday 23 March 2026 00:44:21 +0000 (0:00:00.114) 0:00:59.704 ********** 2026-03-23 00:44:26.926455 | orchestrator | ok: [testbed-node-5] => { 2026-03-23 00:44:26.926460 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-23 00:44:26.926464 | orchestrator | } 2026-03-23 00:44:26.926469 | orchestrator | 2026-03-23 00:44:26.926474 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-23 00:44:26.926479 | orchestrator | Monday 23 March 2026 00:44:21 +0000 (0:00:00.116) 0:00:59.821 ********** 2026-03-23 00:44:26.926483 | orchestrator | ok: [testbed-node-5] => { 2026-03-23 00:44:26.926488 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-23 00:44:26.926492 | orchestrator | } 2026-03-23 00:44:26.926497 | orchestrator | 2026-03-23 00:44:26.926501 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-23 00:44:26.926506 | orchestrator | Monday 23 March 2026 00:44:22 +0000 (0:00:00.124) 0:00:59.945 ********** 2026-03-23 00:44:26.926510 | orchestrator | ok: [testbed-node-5] => { 2026-03-23 00:44:26.926515 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-03-23 00:44:26.926520 | orchestrator | } 2026-03-23 00:44:26.926524 | orchestrator | 2026-03-23 00:44:26.926529 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-23 00:44:26.926534 | orchestrator | Monday 23 March 2026 00:44:22 +0000 (0:00:00.119) 0:01:00.065 ********** 2026-03-23 00:44:26.926552 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:26.926557 | orchestrator | 2026-03-23 00:44:26.926561 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-23 00:44:26.926566 | orchestrator | Monday 23 March 2026 00:44:22 +0000 (0:00:00.524) 0:01:00.589 ********** 2026-03-23 00:44:26.926570 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:26.926575 | orchestrator | 2026-03-23 00:44:26.926579 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-23 00:44:26.926584 | orchestrator | Monday 23 March 2026 00:44:23 +0000 (0:00:00.505) 0:01:01.095 ********** 2026-03-23 00:44:26.926588 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:26.926593 | orchestrator | 2026-03-23 00:44:26.926598 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-23 00:44:26.926602 | orchestrator | Monday 23 March 2026 00:44:23 +0000 (0:00:00.535) 0:01:01.630 ********** 2026-03-23 00:44:26.926607 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:26.926611 | orchestrator | 2026-03-23 00:44:26.926616 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-23 00:44:26.926620 | orchestrator | Monday 23 March 2026 00:44:24 +0000 (0:00:00.301) 0:01:01.931 ********** 2026-03-23 00:44:26.926625 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926629 | orchestrator | 2026-03-23 00:44:26.926634 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
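The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks above read LVM's JSON report (`vgs --units b --reportformat json` emits a top-level `report` list containing a `vg` array), and the later "Print LVM VGs report data" task shows the combined result, `{"vg": []}` here since this run defines no DB or WAL devices. A sketch of parsing that report shape, using a hard-coded sample with made-up sizes instead of invoking `vgs`:

```python
import json

# Sample in the shape produced by `vgs --units b --reportformat json`;
# the vg_name comes from the log above, the byte sizes are invented.
sample = '''{"report": [{"vg": [
  {"vg_name": "ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa",
   "vg_size": "10733223936B", "vg_free": "0B"}
]}]}'''

def vg_sizes(report_json):
    """Map vg_name -> (total_bytes, free_bytes) from an LVM JSON report."""
    out = {}
    for group in json.loads(report_json)["report"]:
        for vg in group.get("vg", []):
            # With --units b, sizes are reported as "<n>B"; strip the suffix.
            out[vg["vg_name"]] = (
                int(vg["vg_size"].rstrip("B")),
                int(vg["vg_free"].rstrip("B")),
            )
    return out

print(vg_sizes(sample))
```

With no DB/WAL VGs present, the `vg` array is empty and the map comes back empty, which is why the size-check tasks that follow are all skipped.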
**************************************** 2026-03-23 00:44:26.926638 | orchestrator | Monday 23 March 2026 00:44:24 +0000 (0:00:00.106) 0:01:02.037 ********** 2026-03-23 00:44:26.926643 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926647 | orchestrator | 2026-03-23 00:44:26.926652 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-23 00:44:26.926656 | orchestrator | Monday 23 March 2026 00:44:24 +0000 (0:00:00.110) 0:01:02.148 ********** 2026-03-23 00:44:26.926661 | orchestrator | ok: [testbed-node-5] => { 2026-03-23 00:44:26.926665 | orchestrator |  "vgs_report": { 2026-03-23 00:44:26.926670 | orchestrator |  "vg": [] 2026-03-23 00:44:26.926720 | orchestrator |  } 2026-03-23 00:44:26.926725 | orchestrator | } 2026-03-23 00:44:26.926730 | orchestrator | 2026-03-23 00:44:26.926734 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-23 00:44:26.926739 | orchestrator | Monday 23 March 2026 00:44:24 +0000 (0:00:00.142) 0:01:02.291 ********** 2026-03-23 00:44:26.926744 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926748 | orchestrator | 2026-03-23 00:44:26.926753 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-23 00:44:26.926758 | orchestrator | Monday 23 March 2026 00:44:24 +0000 (0:00:00.137) 0:01:02.428 ********** 2026-03-23 00:44:26.926762 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926767 | orchestrator | 2026-03-23 00:44:26.926771 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-23 00:44:26.926776 | orchestrator | Monday 23 March 2026 00:44:24 +0000 (0:00:00.146) 0:01:02.574 ********** 2026-03-23 00:44:26.926781 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926786 | orchestrator | 2026-03-23 00:44:26.926791 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-03-23 00:44:26.926802 | orchestrator | Monday 23 March 2026 00:44:24 +0000 (0:00:00.136) 0:01:02.711 ********** 2026-03-23 00:44:26.926808 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926813 | orchestrator | 2026-03-23 00:44:26.926818 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-23 00:44:26.926823 | orchestrator | Monday 23 March 2026 00:44:24 +0000 (0:00:00.144) 0:01:02.855 ********** 2026-03-23 00:44:26.926828 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926834 | orchestrator | 2026-03-23 00:44:26.926839 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-23 00:44:26.926844 | orchestrator | Monday 23 March 2026 00:44:25 +0000 (0:00:00.136) 0:01:02.991 ********** 2026-03-23 00:44:26.926849 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926858 | orchestrator | 2026-03-23 00:44:26.926863 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-23 00:44:26.926869 | orchestrator | Monday 23 March 2026 00:44:25 +0000 (0:00:00.124) 0:01:03.116 ********** 2026-03-23 00:44:26.926874 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926879 | orchestrator | 2026-03-23 00:44:26.926884 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-23 00:44:26.926889 | orchestrator | Monday 23 March 2026 00:44:25 +0000 (0:00:00.133) 0:01:03.250 ********** 2026-03-23 00:44:26.926894 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926900 | orchestrator | 2026-03-23 00:44:26.926905 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-23 00:44:26.926910 | orchestrator | Monday 23 March 2026 00:44:25 +0000 (0:00:00.129) 0:01:03.379 ********** 2026-03-23 00:44:26.926915 | orchestrator | skipping: 
[testbed-node-5] 2026-03-23 00:44:26.926920 | orchestrator | 2026-03-23 00:44:26.926926 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-23 00:44:26.926931 | orchestrator | Monday 23 March 2026 00:44:25 +0000 (0:00:00.330) 0:01:03.709 ********** 2026-03-23 00:44:26.926936 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926942 | orchestrator | 2026-03-23 00:44:26.926947 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-23 00:44:26.926952 | orchestrator | Monday 23 March 2026 00:44:25 +0000 (0:00:00.155) 0:01:03.865 ********** 2026-03-23 00:44:26.926957 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926962 | orchestrator | 2026-03-23 00:44:26.926967 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-23 00:44:26.926973 | orchestrator | Monday 23 March 2026 00:44:26 +0000 (0:00:00.143) 0:01:04.008 ********** 2026-03-23 00:44:26.926978 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.926983 | orchestrator | 2026-03-23 00:44:26.926988 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-23 00:44:26.927011 | orchestrator | Monday 23 March 2026 00:44:26 +0000 (0:00:00.140) 0:01:04.149 ********** 2026-03-23 00:44:26.927017 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.927022 | orchestrator | 2026-03-23 00:44:26.927027 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-23 00:44:26.927032 | orchestrator | Monday 23 March 2026 00:44:26 +0000 (0:00:00.148) 0:01:04.298 ********** 2026-03-23 00:44:26.927038 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.927043 | orchestrator | 2026-03-23 00:44:26.927048 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-23 00:44:26.927053 | 
orchestrator | Monday 23 March 2026 00:44:26 +0000 (0:00:00.126) 0:01:04.424 ********** 2026-03-23 00:44:26.927058 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:26.927064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:26.927069 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.927075 | orchestrator | 2026-03-23 00:44:26.927080 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-23 00:44:26.927085 | orchestrator | Monday 23 March 2026 00:44:26 +0000 (0:00:00.158) 0:01:04.583 ********** 2026-03-23 00:44:26.927090 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:26.927096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:26.927101 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:26.927107 | orchestrator | 2026-03-23 00:44:26.927112 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-23 00:44:26.927121 | orchestrator | Monday 23 March 2026 00:44:26 +0000 (0:00:00.151) 0:01:04.735 ********** 2026-03-23 00:44:26.927130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:30.004793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 
00:44:30.004883 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:30.004896 | orchestrator | 2026-03-23 00:44:30.004907 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-23 00:44:30.004918 | orchestrator | Monday 23 March 2026 00:44:27 +0000 (0:00:00.164) 0:01:04.899 ********** 2026-03-23 00:44:30.004927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:30.004952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:30.004961 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:30.004970 | orchestrator | 2026-03-23 00:44:30.004978 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-23 00:44:30.004987 | orchestrator | Monday 23 March 2026 00:44:27 +0000 (0:00:00.162) 0:01:05.061 ********** 2026-03-23 00:44:30.004996 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:30.005004 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:30.005013 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:30.005022 | orchestrator | 2026-03-23 00:44:30.005031 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-23 00:44:30.005039 | orchestrator | Monday 23 March 2026 00:44:27 +0000 (0:00:00.148) 0:01:05.210 ********** 2026-03-23 00:44:30.005048 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 
'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:30.005056 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:30.005065 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:30.005073 | orchestrator | 2026-03-23 00:44:30.005082 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-23 00:44:30.005090 | orchestrator | Monday 23 March 2026 00:44:27 +0000 (0:00:00.136) 0:01:05.346 ********** 2026-03-23 00:44:30.005099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:30.005107 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:30.005116 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:30.005124 | orchestrator | 2026-03-23 00:44:30.005133 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-23 00:44:30.005141 | orchestrator | Monday 23 March 2026 00:44:27 +0000 (0:00:00.365) 0:01:05.712 ********** 2026-03-23 00:44:30.005150 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:30.005158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:30.005167 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:30.005194 | orchestrator | 2026-03-23 00:44:30.005203 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-23 
00:44:30.005212 | orchestrator | Monday 23 March 2026 00:44:27 +0000 (0:00:00.156) 0:01:05.868 ********** 2026-03-23 00:44:30.005220 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:30.005230 | orchestrator | 2026-03-23 00:44:30.005238 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-23 00:44:30.005247 | orchestrator | Monday 23 March 2026 00:44:28 +0000 (0:00:00.551) 0:01:06.420 ********** 2026-03-23 00:44:30.005255 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:30.005264 | orchestrator | 2026-03-23 00:44:30.005273 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-23 00:44:30.005281 | orchestrator | Monday 23 March 2026 00:44:29 +0000 (0:00:00.521) 0:01:06.941 ********** 2026-03-23 00:44:30.005290 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:30.005300 | orchestrator | 2026-03-23 00:44:30.005310 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-23 00:44:30.005320 | orchestrator | Monday 23 March 2026 00:44:29 +0000 (0:00:00.153) 0:01:07.094 ********** 2026-03-23 00:44:30.005330 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'vg_name': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'}) 2026-03-23 00:44:30.005341 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'vg_name': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'}) 2026-03-23 00:44:30.005351 | orchestrator | 2026-03-23 00:44:30.005361 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-23 00:44:30.005371 | orchestrator | Monday 23 March 2026 00:44:29 +0000 (0:00:00.165) 0:01:07.260 ********** 2026-03-23 00:44:30.005397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 
'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:30.005408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:30.005418 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:30.005428 | orchestrator | 2026-03-23 00:44:30.005438 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-23 00:44:30.005447 | orchestrator | Monday 23 March 2026 00:44:29 +0000 (0:00:00.157) 0:01:07.417 ********** 2026-03-23 00:44:30.005458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:30.005468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:30.005478 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:30.005488 | orchestrator | 2026-03-23 00:44:30.005498 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-23 00:44:30.005508 | orchestrator | Monday 23 March 2026 00:44:29 +0000 (0:00:00.157) 0:01:07.574 ********** 2026-03-23 00:44:30.005518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'})  2026-03-23 00:44:30.005526 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'})  2026-03-23 00:44:30.005535 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:30.005543 | orchestrator | 2026-03-23 00:44:30.005552 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-23 
00:44:30.005560 | orchestrator | Monday 23 March 2026 00:44:29 +0000 (0:00:00.151) 0:01:07.725 ********** 2026-03-23 00:44:30.005569 | orchestrator | ok: [testbed-node-5] => { 2026-03-23 00:44:30.005577 | orchestrator |  "lvm_report": { 2026-03-23 00:44:30.005586 | orchestrator |  "lv": [ 2026-03-23 00:44:30.005602 | orchestrator |  { 2026-03-23 00:44:30.005610 | orchestrator |  "lv_name": "osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7", 2026-03-23 00:44:30.005619 | orchestrator |  "vg_name": "ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7" 2026-03-23 00:44:30.005628 | orchestrator |  }, 2026-03-23 00:44:30.005636 | orchestrator |  { 2026-03-23 00:44:30.005645 | orchestrator |  "lv_name": "osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa", 2026-03-23 00:44:30.005653 | orchestrator |  "vg_name": "ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa" 2026-03-23 00:44:30.005662 | orchestrator |  } 2026-03-23 00:44:30.005671 | orchestrator |  ], 2026-03-23 00:44:30.005679 | orchestrator |  "pv": [ 2026-03-23 00:44:30.005728 | orchestrator |  { 2026-03-23 00:44:30.005737 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-23 00:44:30.005746 | orchestrator |  "vg_name": "ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa" 2026-03-23 00:44:30.005754 | orchestrator |  }, 2026-03-23 00:44:30.005763 | orchestrator |  { 2026-03-23 00:44:30.005771 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-23 00:44:30.005780 | orchestrator |  "vg_name": "ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7" 2026-03-23 00:44:30.005788 | orchestrator |  } 2026-03-23 00:44:30.005797 | orchestrator |  ] 2026-03-23 00:44:30.005805 | orchestrator |  } 2026-03-23 00:44:30.005814 | orchestrator | } 2026-03-23 00:44:30.005823 | orchestrator | 2026-03-23 00:44:30.005831 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:44:30.005840 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-23 00:44:30.005849 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-23 00:44:30.005858 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-23 00:44:30.005866 | orchestrator | 2026-03-23 00:44:30.005875 | orchestrator | 2026-03-23 00:44:30.005883 | orchestrator | 2026-03-23 00:44:30.005899 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:44:30.005908 | orchestrator | Monday 23 March 2026 00:44:29 +0000 (0:00:00.144) 0:01:07.870 ********** 2026-03-23 00:44:30.005917 | orchestrator | =============================================================================== 2026-03-23 00:44:30.005925 | orchestrator | Create block VGs -------------------------------------------------------- 5.76s 2026-03-23 00:44:30.005934 | orchestrator | Create block LVs -------------------------------------------------------- 4.24s 2026-03-23 00:44:30.005942 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.82s 2026-03-23 00:44:30.005951 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.62s 2026-03-23 00:44:30.005959 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.60s 2026-03-23 00:44:30.005967 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.57s 2026-03-23 00:44:30.005976 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s 2026-03-23 00:44:30.005984 | orchestrator | Add known partitions to the list of available block devices ------------- 1.32s 2026-03-23 00:44:30.005999 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s 2026-03-23 00:44:30.379561 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-03-23 
00:44:30.379644 | orchestrator | Print LVM report data --------------------------------------------------- 0.86s 2026-03-23 00:44:30.379654 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s 2026-03-23 00:44:30.379662 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-03-23 00:44:30.379669 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-03-23 00:44:30.379740 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.67s 2026-03-23 00:44:30.379749 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.66s 2026-03-23 00:44:30.379770 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.66s 2026-03-23 00:44:30.379778 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s 2026-03-23 00:44:30.379784 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.65s 2026-03-23 00:44:30.379791 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-03-23 00:44:42.069229 | orchestrator | 2026-03-23 00:44:42 | INFO  | Prepare task for execution of facts. 2026-03-23 00:44:42.151005 | orchestrator | 2026-03-23 00:44:42 | INFO  | Task 3307a8d9-76dc-4d9d-83f1-6390c1efddc1 (facts) was prepared for execution. 2026-03-23 00:44:42.151102 | orchestrator | 2026-03-23 00:44:42 | INFO  | It takes a moment until task 3307a8d9-76dc-4d9d-83f1-6390c1efddc1 (facts) has been started and output is visible here. 
2026-03-23 00:44:53.695835 | orchestrator | 2026-03-23 00:44:53.695962 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-23 00:44:53.695977 | orchestrator | 2026-03-23 00:44:53.695987 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-23 00:44:53.695996 | orchestrator | Monday 23 March 2026 00:44:45 +0000 (0:00:00.364) 0:00:00.364 ********** 2026-03-23 00:44:53.696004 | orchestrator | ok: [testbed-manager] 2026-03-23 00:44:53.696014 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:44:53.696022 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:44:53.696031 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:44:53.696038 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:44:53.696046 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:44:53.696054 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:53.696062 | orchestrator | 2026-03-23 00:44:53.696070 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-23 00:44:53.696078 | orchestrator | Monday 23 March 2026 00:44:46 +0000 (0:00:01.302) 0:00:01.666 ********** 2026-03-23 00:44:53.696087 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:44:53.696096 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:44:53.696104 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:44:53.696112 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:44:53.696119 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:44:53.696127 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:44:53.696135 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:53.696143 | orchestrator | 2026-03-23 00:44:53.696151 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-23 00:44:53.696159 | orchestrator | 2026-03-23 00:44:53.696167 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-23 00:44:53.696175 | orchestrator | Monday 23 March 2026 00:44:48 +0000 (0:00:01.239) 0:00:02.906 ********** 2026-03-23 00:44:53.696183 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:44:53.696191 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:44:53.696199 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:44:53.696207 | orchestrator | ok: [testbed-manager] 2026-03-23 00:44:53.696214 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:44:53.696222 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:44:53.696230 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:44:53.696238 | orchestrator | 2026-03-23 00:44:53.696246 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-23 00:44:53.696254 | orchestrator | 2026-03-23 00:44:53.696262 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-23 00:44:53.696270 | orchestrator | Monday 23 March 2026 00:44:53 +0000 (0:00:04.966) 0:00:07.873 ********** 2026-03-23 00:44:53.696278 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:44:53.696286 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:44:53.696326 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:44:53.696335 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:44:53.696345 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:44:53.696354 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:44:53.696363 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:44:53.696372 | orchestrator | 2026-03-23 00:44:53.696381 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:44:53.696391 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:44:53.696400 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-23 00:44:53.696409 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:44:53.696417 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:44:53.696425 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:44:53.696432 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:44:53.696440 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:44:53.696448 | orchestrator | 2026-03-23 00:44:53.696456 | orchestrator | 2026-03-23 00:44:53.696464 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:44:53.696472 | orchestrator | Monday 23 March 2026 00:44:53 +0000 (0:00:00.434) 0:00:08.307 ********** 2026-03-23 00:44:53.696480 | orchestrator | =============================================================================== 2026-03-23 00:44:53.696487 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.97s 2026-03-23 00:44:53.696495 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s 2026-03-23 00:44:53.696519 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2026-03-23 00:44:53.696527 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.43s 2026-03-23 00:45:04.981532 | orchestrator | 2026-03-23 00:45:04 | INFO  | Prepare task for execution of frr. 2026-03-23 00:45:05.054366 | orchestrator | 2026-03-23 00:45:05 | INFO  | Task 6ba2da8a-57fc-4613-919b-9920d3653c1e (frr) was prepared for execution. 
2026-03-23 00:45:05.054454 | orchestrator | 2026-03-23 00:45:05 | INFO  | It takes a moment until task 6ba2da8a-57fc-4613-919b-9920d3653c1e (frr) has been started and output is visible here. 2026-03-23 00:45:29.110384 | orchestrator | 2026-03-23 00:45:29.110498 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-23 00:45:29.110519 | orchestrator | 2026-03-23 00:45:29.110535 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-23 00:45:29.110550 | orchestrator | Monday 23 March 2026 00:45:08 +0000 (0:00:00.303) 0:00:00.303 ********** 2026-03-23 00:45:29.110563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-23 00:45:29.110578 | orchestrator | 2026-03-23 00:45:29.110590 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-23 00:45:29.110604 | orchestrator | Monday 23 March 2026 00:45:08 +0000 (0:00:00.200) 0:00:00.504 ********** 2026-03-23 00:45:29.110616 | orchestrator | changed: [testbed-manager] 2026-03-23 00:45:29.110630 | orchestrator | 2026-03-23 00:45:29.110644 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-23 00:45:29.110690 | orchestrator | Monday 23 March 2026 00:45:09 +0000 (0:00:01.501) 0:00:02.005 ********** 2026-03-23 00:45:29.110704 | orchestrator | changed: [testbed-manager] 2026-03-23 00:45:29.110743 | orchestrator | 2026-03-23 00:45:29.110759 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-23 00:45:29.110773 | orchestrator | Monday 23 March 2026 00:45:19 +0000 (0:00:09.347) 0:00:11.353 ********** 2026-03-23 00:45:29.110788 | orchestrator | ok: [testbed-manager] 2026-03-23 00:45:29.110803 | orchestrator | 2026-03-23 00:45:29.110818 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-23 00:45:29.110831 | orchestrator | Monday 23 March 2026 00:45:20 +0000 (0:00:00.997) 0:00:12.351 ********** 2026-03-23 00:45:29.110846 | orchestrator | changed: [testbed-manager] 2026-03-23 00:45:29.110861 | orchestrator | 2026-03-23 00:45:29.110876 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-23 00:45:29.110891 | orchestrator | Monday 23 March 2026 00:45:21 +0000 (0:00:00.936) 0:00:13.287 ********** 2026-03-23 00:45:29.110905 | orchestrator | ok: [testbed-manager] 2026-03-23 00:45:29.110919 | orchestrator | 2026-03-23 00:45:29.110933 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-23 00:45:29.110946 | orchestrator | Monday 23 March 2026 00:45:22 +0000 (0:00:01.165) 0:00:14.453 ********** 2026-03-23 00:45:29.110962 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:45:29.110977 | orchestrator | 2026-03-23 00:45:29.110993 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-23 00:45:29.111008 | orchestrator | Monday 23 March 2026 00:45:22 +0000 (0:00:00.153) 0:00:14.607 ********** 2026-03-23 00:45:29.111022 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:45:29.111035 | orchestrator | 2026-03-23 00:45:29.111050 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-23 00:45:29.111065 | orchestrator | Monday 23 March 2026 00:45:22 +0000 (0:00:00.283) 0:00:14.890 ********** 2026-03-23 00:45:29.111081 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:45:29.111096 | orchestrator | 2026-03-23 00:45:29.111110 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-23 00:45:29.111124 | orchestrator | Monday 23 March 2026 00:45:23 +0000 (0:00:00.154) 0:00:15.044 ********** 2026-03-23 
00:45:29.111138 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:45:29.111150 | orchestrator | 2026-03-23 00:45:29.111165 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-23 00:45:29.111178 | orchestrator | Monday 23 March 2026 00:45:23 +0000 (0:00:00.140) 0:00:15.185 ********** 2026-03-23 00:45:29.111190 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:45:29.111204 | orchestrator | 2026-03-23 00:45:29.111218 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-23 00:45:29.111233 | orchestrator | Monday 23 March 2026 00:45:23 +0000 (0:00:00.153) 0:00:15.339 ********** 2026-03-23 00:45:29.111246 | orchestrator | changed: [testbed-manager] 2026-03-23 00:45:29.111260 | orchestrator | 2026-03-23 00:45:29.111273 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-23 00:45:29.111289 | orchestrator | Monday 23 March 2026 00:45:24 +0000 (0:00:00.935) 0:00:16.274 ********** 2026-03-23 00:45:29.111303 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-23 00:45:29.111318 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-23 00:45:29.111332 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-23 00:45:29.111346 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-23 00:45:29.111359 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-23 00:45:29.111373 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-23 00:45:29.111398 | orchestrator | 2026-03-23 00:45:29.111412 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-23 00:45:29.111443 | orchestrator | Monday 23 March 2026 00:45:26 +0000 (0:00:02.223) 0:00:18.497 ********** 2026-03-23 00:45:29.111457 | orchestrator | ok: [testbed-manager] 2026-03-23 00:45:29.111471 | orchestrator | 2026-03-23 00:45:29.111485 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-23 00:45:29.111499 | orchestrator | Monday 23 March 2026 00:45:27 +0000 (0:00:01.084) 0:00:19.581 ********** 2026-03-23 00:45:29.111514 | orchestrator | changed: [testbed-manager] 2026-03-23 00:45:29.111528 | orchestrator | 2026-03-23 00:45:29.111542 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:45:29.111555 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-23 00:45:29.111568 | orchestrator | 2026-03-23 00:45:29.111583 | orchestrator | 2026-03-23 00:45:29.111618 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:45:29.111632 | orchestrator | Monday 23 March 2026 00:45:28 +0000 (0:00:01.314) 0:00:20.895 ********** 2026-03-23 00:45:29.111646 | orchestrator | =============================================================================== 2026-03-23 00:45:29.111659 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.35s 2026-03-23 00:45:29.111674 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.22s 2026-03-23 00:45:29.111688 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.50s 2026-03-23 00:45:29.111703 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.31s 2026-03-23 00:45:29.111741 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.17s 
2026-03-23 00:45:29.111755 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.08s 2026-03-23 00:45:29.111767 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.00s 2026-03-23 00:45:29.111780 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s 2026-03-23 00:45:29.111794 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.94s 2026-03-23 00:45:29.111806 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.28s 2026-03-23 00:45:29.111820 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-03-23 00:45:29.111834 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s 2026-03-23 00:45:29.111846 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-23 00:45:29.111856 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-03-23 00:45:29.111869 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-23 00:45:29.225355 | orchestrator | 2026-03-23 00:45:29.228570 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Mar 23 00:45:29 UTC 2026 2026-03-23 00:45:29.228650 | orchestrator | 2026-03-23 00:45:30.228789 | orchestrator | 2026-03-23 00:45:30 | INFO  | Collection nutshell is prepared for execution 2026-03-23 00:45:30.331279 | orchestrator | 2026-03-23 00:45:30 | INFO  | A [0] - dotfiles 2026-03-23 00:45:40.372244 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [0] - homer 2026-03-23 00:45:40.372372 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [0] - netdata 2026-03-23 00:45:40.372409 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [0] - openstackclient 2026-03-23 00:45:40.373245 | orchestrator | 2026-03-23 
00:45:40 | INFO  | A [0] - phpmyadmin 2026-03-23 00:45:40.373307 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [0] - common 2026-03-23 00:45:40.377714 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [1] -- loadbalancer 2026-03-23 00:45:40.377835 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [2] --- opensearch 2026-03-23 00:45:40.378218 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [2] --- mariadb-ng 2026-03-23 00:45:40.378368 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [3] ---- horizon 2026-03-23 00:45:40.378621 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [3] ---- keystone 2026-03-23 00:45:40.379143 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [4] ----- neutron 2026-03-23 00:45:40.379550 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [5] ------ wait-for-nova 2026-03-23 00:45:40.379945 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [6] ------- octavia 2026-03-23 00:45:40.381793 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [4] ----- barbican 2026-03-23 00:45:40.381841 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [4] ----- designate 2026-03-23 00:45:40.382355 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [4] ----- ironic 2026-03-23 00:45:40.382396 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [4] ----- placement 2026-03-23 00:45:40.382423 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [4] ----- magnum 2026-03-23 00:45:40.384080 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [1] -- openvswitch 2026-03-23 00:45:40.384313 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [2] --- ovn 2026-03-23 00:45:40.384909 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [1] -- memcached 2026-03-23 00:45:40.385208 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [1] -- redis 2026-03-23 00:45:40.385400 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [1] -- rabbitmq-ng 2026-03-23 00:45:40.385831 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [0] - kubernetes 2026-03-23 00:45:40.389379 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [1] -- 
kubeconfig
2026-03-23 00:45:40.389445 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [1] -- copy-kubeconfig
2026-03-23 00:45:40.390077 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [0] - ceph
2026-03-23 00:45:40.393250 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [1] -- ceph-pools
2026-03-23 00:45:40.393422 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [2] --- copy-ceph-keys
2026-03-23 00:45:40.393695 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [3] ---- cephclient
2026-03-23 00:45:40.394078 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-03-23 00:45:40.394346 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [4] ----- wait-for-keystone
2026-03-23 00:45:40.394648 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [5] ------ kolla-ceph-rgw
2026-03-23 00:45:40.395199 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [5] ------ glance
2026-03-23 00:45:40.395225 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [5] ------ cinder
2026-03-23 00:45:40.395526 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [5] ------ nova
2026-03-23 00:45:40.396149 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [4] ----- prometheus
2026-03-23 00:45:40.396431 | orchestrator | 2026-03-23 00:45:40 | INFO  | A [5] ------ grafana
2026-03-23 00:45:40.597287 | orchestrator | 2026-03-23 00:45:40 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-03-23 00:45:40.597385 | orchestrator | 2026-03-23 00:45:40 | INFO  | Tasks are running in the background
2026-03-23 00:45:42.251114 | orchestrator | 2026-03-23 00:45:42 | INFO  | No task IDs specified, wait for all currently running tasks
2026-03-23 00:45:44.437034 | orchestrator | 2026-03-23 00:45:44 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED
2026-03-23 00:45:44.437301 | orchestrator | 2026-03-23 00:45:44 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:45:44.440491 | orchestrator | 2026-03-23 00:45:44 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:45:44.441206 | orchestrator | 2026-03-23 00:45:44 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED
2026-03-23 00:45:44.441653 | orchestrator | 2026-03-23 00:45:44 | INFO  | Task 834fe566-4433-4ad5-9593-162c7efa7ba9 is in state STARTED
2026-03-23 00:45:44.442318 | orchestrator | 2026-03-23 00:45:44 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:45:44.443206 | orchestrator | 2026-03-23 00:45:44 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:45:44.443288 | orchestrator | 2026-03-23 00:45:44 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:45:47.489286 | orchestrator | 2026-03-23 00:45:47 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED
2026-03-23 00:45:47.489422 | orchestrator | 2026-03-23 00:45:47 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:45:47.492768 | orchestrator | 2026-03-23 00:45:47 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:45:47.497461 | orchestrator | 2026-03-23 00:45:47 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED
2026-03-23 00:45:47.497549 | orchestrator | 2026-03-23 00:45:47 | INFO  | Task 834fe566-4433-4ad5-9593-162c7efa7ba9 is in state STARTED
2026-03-23 00:45:47.497570 | orchestrator | 2026-03-23 00:45:47 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:45:47.500894 | orchestrator | 2026-03-23 00:45:47 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:45:47.500968 | orchestrator | 2026-03-23 00:45:47 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:45:50.540361 | orchestrator | 2026-03-23 00:45:50 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED
2026-03-23 00:45:50.540451 | orchestrator | 2026-03-23 00:45:50 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:45:50.540936 | orchestrator | 2026-03-23 00:45:50 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:45:50.541330 | orchestrator | 2026-03-23 00:45:50 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED
2026-03-23 00:45:50.542486 | orchestrator | 2026-03-23 00:45:50 | INFO  | Task 834fe566-4433-4ad5-9593-162c7efa7ba9 is in state STARTED
2026-03-23 00:45:50.553956 | orchestrator | 2026-03-23 00:45:50 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:45:50.554903 | orchestrator | 2026-03-23 00:45:50 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:45:50.555400 | orchestrator | 2026-03-23 00:45:50 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:45:53.602197 | orchestrator | 2026-03-23 00:45:53 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED
2026-03-23 00:45:53.602281 | orchestrator | 2026-03-23 00:45:53 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:45:53.602287 | orchestrator | 2026-03-23 00:45:53 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:45:53.602292 | orchestrator | 2026-03-23 00:45:53 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED
2026-03-23 00:45:53.602296 | orchestrator | 2026-03-23 00:45:53 | INFO  | Task 834fe566-4433-4ad5-9593-162c7efa7ba9 is in state STARTED
2026-03-23 00:45:53.602301 | orchestrator | 2026-03-23 00:45:53 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:45:53.602327 | orchestrator | 2026-03-23 00:45:53 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:45:53.602332 | orchestrator | 2026-03-23 00:45:53 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:45:56.641475 | orchestrator | 2026-03-23 00:45:56 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED
2026-03-23 00:45:56.641573 | orchestrator | 2026-03-23 00:45:56 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:45:56.641587 | orchestrator | 2026-03-23 00:45:56 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:45:56.642192 | orchestrator | 2026-03-23 00:45:56 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED
2026-03-23 00:45:56.642494 | orchestrator | 2026-03-23 00:45:56 | INFO  | Task 834fe566-4433-4ad5-9593-162c7efa7ba9 is in state STARTED
2026-03-23 00:45:56.644712 | orchestrator | 2026-03-23 00:45:56 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:45:56.644802 | orchestrator | 2026-03-23 00:45:56 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:45:56.644816 | orchestrator | 2026-03-23 00:45:56 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:45:59.771570 | orchestrator | 2026-03-23 00:45:59 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED
2026-03-23 00:45:59.771714 | orchestrator | 2026-03-23 00:45:59 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:45:59.771864 | orchestrator | 2026-03-23 00:45:59 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:45:59.771906 | orchestrator | 2026-03-23 00:45:59 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED
2026-03-23 00:45:59.771926 | orchestrator | 2026-03-23 00:45:59 | INFO  | Task 834fe566-4433-4ad5-9593-162c7efa7ba9 is in state STARTED
2026-03-23 00:45:59.771944 | orchestrator | 2026-03-23 00:45:59 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:45:59.771963 | orchestrator | 2026-03-23 00:45:59 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:45:59.771980 | orchestrator | 2026-03-23 00:45:59 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:46:02.795395 | orchestrator | 2026-03-23 00:46:02 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED
2026-03-23 00:46:02.795488 | orchestrator | 2026-03-23 00:46:02 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:46:02.797205 | orchestrator | 2026-03-23 00:46:02 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:46:02.797879 | orchestrator | 2026-03-23 00:46:02 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED
2026-03-23 00:46:02.799368 | orchestrator | 2026-03-23 00:46:02 | INFO  | Task 834fe566-4433-4ad5-9593-162c7efa7ba9 is in state STARTED
2026-03-23 00:46:02.802013 | orchestrator | 2026-03-23 00:46:02 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:46:02.802445 | orchestrator | 2026-03-23 00:46:02 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:46:02.802471 | orchestrator | 2026-03-23 00:46:02 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:46:05.894235 | orchestrator | 2026-03-23 00:46:05 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:46:05.898387 | orchestrator | 2026-03-23 00:46:05 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED
2026-03-23 00:46:05.900608 | orchestrator | 2026-03-23 00:46:05 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:46:05.901020 | orchestrator | 2026-03-23 00:46:05 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:46:05.901883 | orchestrator | 2026-03-23 00:46:05 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED
2026-03-23 00:46:05.902396 | orchestrator | 2026-03-23 00:46:05 | INFO  | Task 834fe566-4433-4ad5-9593-162c7efa7ba9 is in state SUCCESS
2026-03-23 00:46:05.902938 | orchestrator | 2026-03-23
00:46:05.902970 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-23 00:46:05.902980 | orchestrator |
2026-03-23 00:46:05.902988 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-23 00:46:05.902995 | orchestrator | Monday 23 March 2026 00:45:50 +0000 (0:00:00.675) 0:00:00.675 **********
2026-03-23 00:46:05.903003 | orchestrator | changed: [testbed-manager]
2026-03-23 00:46:05.903011 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:46:05.903018 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:46:05.903025 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:46:05.903032 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:46:05.903039 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:46:05.903046 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:46:05.903054 | orchestrator |
2026-03-23 00:46:05.903061 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-23 00:46:05.903068 | orchestrator | Monday 23 March 2026 00:45:54 +0000 (0:00:04.544) 0:00:05.220 **********
2026-03-23 00:46:05.903075 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-23 00:46:05.903083 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-23 00:46:05.903091 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-23 00:46:05.903098 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-23 00:46:05.903105 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-23 00:46:05.903112 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-23 00:46:05.903119 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-23 00:46:05.903126 | orchestrator |
2026-03-23 00:46:05.903133 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-23 00:46:05.903140 | orchestrator | Monday 23 March 2026 00:45:56 +0000 (0:00:01.960) 0:00:07.181 **********
2026-03-23 00:46:05.903165 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-23 00:45:56.010415', 'end': '2026-03-23 00:45:56.017866', 'delta': '0:00:00.007451', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-23 00:46:05.903190 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-23 00:45:56.072540', 'end': '2026-03-23 00:45:56.079688', 'delta': '0:00:00.007148', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-23 00:46:05.903224 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-23 00:45:55.647234', 'end': '2026-03-23 00:45:55.655172', 'delta': '0:00:00.007938', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-23 00:46:05.903250 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-23 00:45:56.373092', 'end': '2026-03-23 00:45:56.380346', 'delta': '0:00:00.007254', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-23 00:46:05.903259 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-23 00:45:56.489701', 'end': '2026-03-23 00:45:56.500864', 'delta': '0:00:00.011163', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-23 00:46:05.903266 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-23 00:45:56.115863', 'end': '2026-03-23 00:45:56.122226', 'delta': '0:00:00.006363', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-23 00:46:05.903274 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-23 00:45:55.598123', 'end': '2026-03-23 00:45:55.602927', 'delta': '0:00:00.004804', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-23 00:46:05.903287 | orchestrator |
2026-03-23 00:46:05.903294 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-23 00:46:05.903302 | orchestrator | Monday 23 March 2026 00:45:58 +0000 (0:00:01.364) 0:00:08.545 **********
2026-03-23 00:46:05.903309 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-23 00:46:05.903316 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-23 00:46:05.903324 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-23 00:46:05.903331 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-23 00:46:05.903338 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-23 00:46:05.903345 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-23 00:46:05.903352 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-23 00:46:05.903359 | orchestrator |
2026-03-23 00:46:05.903366 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-23 00:46:05.903377 | orchestrator | Monday 23 March 2026 00:45:59 +0000 (0:00:01.245) 0:00:09.790 **********
2026-03-23 00:46:05.903385 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-23 00:46:05.903392 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-23 00:46:05.903399 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-23 00:46:05.903406 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-23 00:46:05.903413 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-23 00:46:05.903420 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-23 00:46:05.903427 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-23 00:46:05.903434 | orchestrator |
2026-03-23 00:46:05.903442 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:46:05.903454 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:46:05.903462 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:46:05.903470 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:46:05.903477 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:46:05.903484 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:46:05.903491 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:46:05.903498 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:46:05.903506 | orchestrator |
2026-03-23 00:46:05.903513 | orchestrator |
2026-03-23 00:46:05.903520 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:46:05.903530 | orchestrator | Monday 23 March 2026 00:46:03 +0000 (0:00:04.202) 0:00:13.993 **********
2026-03-23 00:46:05.903542 | orchestrator | ===============================================================================
2026-03-23 00:46:05.903554 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.54s
2026-03-23 00:46:05.903565 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.20s
2026-03-23 00:46:05.903577 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.96s
2026-03-23 00:46:05.903597 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.36s
2026-03-23 00:46:05.903609 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.25s
2026-03-23 00:46:05.903868 | orchestrator | 2026-03-23 00:46:05 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:46:05.904583 | orchestrator | 2026-03-23 00:46:05 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:46:05.904605 | orchestrator | 2026-03-23 00:46:05 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:46:09.213293 | orchestrator | 2026-03-23 00:46:09 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:46:09.217532 | orchestrator | 2026-03-23 00:46:09 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED
2026-03-23 00:46:09.222073 | orchestrator | 2026-03-23 00:46:09 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:46:09.222579 | orchestrator | 2026-03-23 00:46:09 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:46:09.223432 | orchestrator | 2026-03-23 00:46:09 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is
in state STARTED 2026-03-23 00:46:09.224877 | orchestrator | 2026-03-23 00:46:09 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:09.226225 | orchestrator | 2026-03-23 00:46:09 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:09.226257 | orchestrator | 2026-03-23 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:12.305267 | orchestrator | 2026-03-23 00:46:12 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:12.305356 | orchestrator | 2026-03-23 00:46:12 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED 2026-03-23 00:46:12.306314 | orchestrator | 2026-03-23 00:46:12 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:12.306858 | orchestrator | 2026-03-23 00:46:12 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:12.307962 | orchestrator | 2026-03-23 00:46:12 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED 2026-03-23 00:46:12.308008 | orchestrator | 2026-03-23 00:46:12 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:12.309964 | orchestrator | 2026-03-23 00:46:12 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:12.310008 | orchestrator | 2026-03-23 00:46:12 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:15.334388 | orchestrator | 2026-03-23 00:46:15 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:15.334848 | orchestrator | 2026-03-23 00:46:15 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED 2026-03-23 00:46:15.336255 | orchestrator | 2026-03-23 00:46:15 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:15.339408 | orchestrator | 2026-03-23 00:46:15 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in 
state STARTED 2026-03-23 00:46:15.340168 | orchestrator | 2026-03-23 00:46:15 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED 2026-03-23 00:46:15.341019 | orchestrator | 2026-03-23 00:46:15 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:15.341880 | orchestrator | 2026-03-23 00:46:15 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:15.342010 | orchestrator | 2026-03-23 00:46:15 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:18.417674 | orchestrator | 2026-03-23 00:46:18 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:18.417857 | orchestrator | 2026-03-23 00:46:18 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED 2026-03-23 00:46:18.418563 | orchestrator | 2026-03-23 00:46:18 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:18.419967 | orchestrator | 2026-03-23 00:46:18 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:18.420526 | orchestrator | 2026-03-23 00:46:18 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED 2026-03-23 00:46:18.421241 | orchestrator | 2026-03-23 00:46:18 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:18.421813 | orchestrator | 2026-03-23 00:46:18 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:18.421852 | orchestrator | 2026-03-23 00:46:18 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:21.496447 | orchestrator | 2026-03-23 00:46:21 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:21.496518 | orchestrator | 2026-03-23 00:46:21 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED 2026-03-23 00:46:21.498119 | orchestrator | 2026-03-23 00:46:21 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state 
STARTED 2026-03-23 00:46:21.526074 | orchestrator | 2026-03-23 00:46:21 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:21.526166 | orchestrator | 2026-03-23 00:46:21 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED 2026-03-23 00:46:21.526176 | orchestrator | 2026-03-23 00:46:21 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:21.526182 | orchestrator | 2026-03-23 00:46:21 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:21.526190 | orchestrator | 2026-03-23 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:24.573209 | orchestrator | 2026-03-23 00:46:24 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:24.573318 | orchestrator | 2026-03-23 00:46:24 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED 2026-03-23 00:46:24.573329 | orchestrator | 2026-03-23 00:46:24 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:24.573337 | orchestrator | 2026-03-23 00:46:24 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:24.573345 | orchestrator | 2026-03-23 00:46:24 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED 2026-03-23 00:46:24.573353 | orchestrator | 2026-03-23 00:46:24 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:24.573358 | orchestrator | 2026-03-23 00:46:24 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:24.573363 | orchestrator | 2026-03-23 00:46:24 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:27.612831 | orchestrator | 2026-03-23 00:46:27 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:27.614200 | orchestrator | 2026-03-23 00:46:27 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED 
2026-03-23 00:46:27.614620 | orchestrator | 2026-03-23 00:46:27 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:27.615123 | orchestrator | 2026-03-23 00:46:27 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:27.617146 | orchestrator | 2026-03-23 00:46:27 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state STARTED 2026-03-23 00:46:27.617199 | orchestrator | 2026-03-23 00:46:27 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:27.617960 | orchestrator | 2026-03-23 00:46:27 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:27.617982 | orchestrator | 2026-03-23 00:46:27 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:30.666097 | orchestrator | 2026-03-23 00:46:30 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:30.667624 | orchestrator | 2026-03-23 00:46:30 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED 2026-03-23 00:46:30.671314 | orchestrator | 2026-03-23 00:46:30 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:30.676043 | orchestrator | 2026-03-23 00:46:30 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:30.676091 | orchestrator | 2026-03-23 00:46:30 | INFO  | Task adda46d8-f6dd-44b2-95b7-67b07650821c is in state SUCCESS 2026-03-23 00:46:30.678454 | orchestrator | 2026-03-23 00:46:30 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:30.681875 | orchestrator | 2026-03-23 00:46:30 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:30.681943 | orchestrator | 2026-03-23 00:46:30 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:33.793999 | orchestrator | 2026-03-23 00:46:33 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 
2026-03-23 00:46:33.794128 | orchestrator | 2026-03-23 00:46:33 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED 2026-03-23 00:46:33.795043 | orchestrator | 2026-03-23 00:46:33 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:33.796028 | orchestrator | 2026-03-23 00:46:33 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:33.798312 | orchestrator | 2026-03-23 00:46:33 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:33.798597 | orchestrator | 2026-03-23 00:46:33 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:33.799281 | orchestrator | 2026-03-23 00:46:33 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:36.836397 | orchestrator | 2026-03-23 00:46:36 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:36.836483 | orchestrator | 2026-03-23 00:46:36 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state STARTED 2026-03-23 00:46:36.838292 | orchestrator | 2026-03-23 00:46:36 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:36.838595 | orchestrator | 2026-03-23 00:46:36 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:36.839757 | orchestrator | 2026-03-23 00:46:36 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:36.841536 | orchestrator | 2026-03-23 00:46:36 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:36.841572 | orchestrator | 2026-03-23 00:46:36 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:39.884617 | orchestrator | 2026-03-23 00:46:39 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:39.885028 | orchestrator | 2026-03-23 00:46:39 | INFO  | Task f830c5cb-b3a3-428a-ba3b-1f759334d374 is in state SUCCESS 
2026-03-23 00:46:39.886336 | orchestrator | 2026-03-23 00:46:39 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:39.887582 | orchestrator | 2026-03-23 00:46:39 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:39.888603 | orchestrator | 2026-03-23 00:46:39 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:39.889805 | orchestrator | 2026-03-23 00:46:39 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:39.889928 | orchestrator | 2026-03-23 00:46:39 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:42.944741 | orchestrator | 2026-03-23 00:46:42 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:42.945692 | orchestrator | 2026-03-23 00:46:42 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:42.947206 | orchestrator | 2026-03-23 00:46:42 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:42.948735 | orchestrator | 2026-03-23 00:46:42 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 2026-03-23 00:46:42.949864 | orchestrator | 2026-03-23 00:46:42 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:46:42.949897 | orchestrator | 2026-03-23 00:46:42 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:46:46.004338 | orchestrator | 2026-03-23 00:46:46 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED 2026-03-23 00:46:46.005235 | orchestrator | 2026-03-23 00:46:46 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:46:46.008484 | orchestrator | 2026-03-23 00:46:46 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:46:46.008539 | orchestrator | 2026-03-23 00:46:46 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED 
2026-03-23 00:46:46.008560 | orchestrator | 2026-03-23 00:46:46 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:46:46.008568 | orchestrator | 2026-03-23 00:46:46 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:46:49.059463 | orchestrator | 2026-03-23 00:46:49 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:46:49.060465 | orchestrator | 2026-03-23 00:46:49 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:46:49.060880 | orchestrator | 2026-03-23 00:46:49 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:46:49.063068 | orchestrator | 2026-03-23 00:46:49 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:46:49.064097 | orchestrator | 2026-03-23 00:46:49 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:46:49.064797 | orchestrator | 2026-03-23 00:46:49 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:46:52.129711 | orchestrator | 2026-03-23 00:46:52 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:46:52.132201 | orchestrator | 2026-03-23 00:46:52 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:46:52.134891 | orchestrator | 2026-03-23 00:46:52 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:46:52.136649 | orchestrator | 2026-03-23 00:46:52 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:46:52.140103 | orchestrator | 2026-03-23 00:46:52 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:46:52.140173 | orchestrator | 2026-03-23 00:46:52 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:46:55.197385 | orchestrator | 2026-03-23 00:46:55 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:46:55.197857 | orchestrator | 2026-03-23 00:46:55 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:46:55.198558 | orchestrator | 2026-03-23 00:46:55 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:46:55.199947 | orchestrator | 2026-03-23 00:46:55 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:46:55.199978 | orchestrator | 2026-03-23 00:46:55 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:46:55.199983 | orchestrator | 2026-03-23 00:46:55 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:46:58.242702 | orchestrator | 2026-03-23 00:46:58 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:46:58.242916 | orchestrator | 2026-03-23 00:46:58 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:46:58.244643 | orchestrator | 2026-03-23 00:46:58 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:46:58.246311 | orchestrator | 2026-03-23 00:46:58 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:46:58.247856 | orchestrator | 2026-03-23 00:46:58 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:46:58.247888 | orchestrator | 2026-03-23 00:46:58 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:47:01.322238 | orchestrator | 2026-03-23 00:47:01 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:47:01.322299 | orchestrator | 2026-03-23 00:47:01 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:47:01.322863 | orchestrator | 2026-03-23 00:47:01 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:47:01.329214 | orchestrator | 2026-03-23 00:47:01 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:47:01.330651 | orchestrator | 2026-03-23 00:47:01 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:47:01.330684 | orchestrator | 2026-03-23 00:47:01 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:47:04.395827 | orchestrator | 2026-03-23 00:47:04 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:47:04.396897 | orchestrator | 2026-03-23 00:47:04 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:47:04.397954 | orchestrator | 2026-03-23 00:47:04 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:47:04.398667 | orchestrator | 2026-03-23 00:47:04 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:47:04.399407 | orchestrator | 2026-03-23 00:47:04 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:47:04.399478 | orchestrator | 2026-03-23 00:47:04 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:47:07.431960 | orchestrator | 2026-03-23 00:47:07 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:47:07.434070 | orchestrator | 2026-03-23 00:47:07 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:47:07.434143 | orchestrator | 2026-03-23 00:47:07 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:47:07.434702 | orchestrator | 2026-03-23 00:47:07 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:47:07.436495 | orchestrator | 2026-03-23 00:47:07 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:47:07.436537 | orchestrator | 2026-03-23 00:47:07 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:47:10.465584 | orchestrator | 2026-03-23 00:47:10 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:47:10.468284 | orchestrator | 2026-03-23 00:47:10 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:47:10.469286 | orchestrator | 2026-03-23 00:47:10 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:47:10.470289 | orchestrator | 2026-03-23 00:47:10 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:47:10.471368 | orchestrator | 2026-03-23 00:47:10 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:47:10.471384 | orchestrator | 2026-03-23 00:47:10 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:47:13.503141 | orchestrator | 2026-03-23 00:47:13 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:47:13.503818 | orchestrator | 2026-03-23 00:47:13 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:47:13.504753 | orchestrator | 2026-03-23 00:47:13 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:47:13.505561 | orchestrator | 2026-03-23 00:47:13 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:47:13.506403 | orchestrator | 2026-03-23 00:47:13 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:47:13.506436 | orchestrator | 2026-03-23 00:47:13 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:47:16.545965 | orchestrator | 2026-03-23 00:47:16 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:47:16.548324 | orchestrator | 2026-03-23 00:47:16 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:47:16.549670 | orchestrator | 2026-03-23 00:47:16 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:47:16.551443 | orchestrator | 2026-03-23 00:47:16 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state STARTED
2026-03-23 00:47:16.552895 | orchestrator | 2026-03-23 00:47:16 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:47:16.553425 | orchestrator | 2026-03-23 00:47:16 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:47:19.592915 | orchestrator | 2026-03-23 00:47:19 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:47:19.593604 | orchestrator | 2026-03-23 00:47:19 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:47:19.595533 | orchestrator | 2026-03-23 00:47:19 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:47:19.596222 | orchestrator | 2026-03-23 00:47:19 | INFO  | Task 6c52269d-5feb-4cc7-a52c-14ac7f528b18 is in state SUCCESS
2026-03-23 00:47:19.598544 | orchestrator |
2026-03-23 00:47:19.598577 | orchestrator |
2026-03-23 00:47:19.598585 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-23 00:47:19.598593 | orchestrator |
2026-03-23 00:47:19.598600 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-23 00:47:19.598608 | orchestrator | Monday 23 March 2026 00:45:50 +0000 (0:00:00.810) 0:00:00.810 **********
2026-03-23 00:47:19.598615 | orchestrator | ok: [testbed-manager] => {
2026-03-23 00:47:19.598634 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-23 00:47:19.598642 | orchestrator | }
2026-03-23 00:47:19.598649 | orchestrator |
2026-03-23 00:47:19.598655 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-23 00:47:19.598665 | orchestrator | Monday 23 March 2026 00:45:50 +0000 (0:00:00.520) 0:00:01.331 **********
2026-03-23 00:47:19.598669 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.598673 | orchestrator |
2026-03-23 00:47:19.598677 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-23 00:47:19.598681 | orchestrator | Monday 23 March 2026 00:45:52 +0000 (0:00:01.732) 0:00:03.063 **********
2026-03-23 00:47:19.598685 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-23 00:47:19.598689 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-23 00:47:19.598693 | orchestrator |
2026-03-23 00:47:19.598696 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-23 00:47:19.598700 | orchestrator | Monday 23 March 2026 00:45:54 +0000 (0:00:02.377) 0:00:04.540 **********
2026-03-23 00:47:19.598705 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.598711 | orchestrator |
2026-03-23 00:47:19.598717 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-23 00:47:19.598763 | orchestrator | Monday 23 March 2026 00:45:56 +0000 (0:00:02.508) 0:00:06.918 **********
2026-03-23 00:47:19.598773 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.598779 | orchestrator |
2026-03-23 00:47:19.598785 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-23 00:47:19.598792 | orchestrator | Monday 23 March 2026 00:45:59 +0000 (0:00:02.508) 0:00:09.426 **********
2026-03-23 00:47:19.598798 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-23 00:47:19.598804 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.598811 | orchestrator |
2026-03-23 00:47:19.598817 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-23 00:47:19.598824 | orchestrator | Monday 23 March 2026 00:46:25 +0000 (0:00:26.239) 0:00:35.665 **********
2026-03-23 00:47:19.598830 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.598837 | orchestrator |
2026-03-23 00:47:19.598844 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:47:19.598848 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:47:19.598853 | orchestrator |
2026-03-23 00:47:19.598857 | orchestrator |
2026-03-23 00:47:19.598860 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:47:19.598864 | orchestrator | Monday 23 March 2026 00:46:28 +0000 (0:00:03.181) 0:00:38.847 **********
2026-03-23 00:47:19.598897 | orchestrator | ===============================================================================
2026-03-23 00:47:19.598901 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.24s
2026-03-23 00:47:19.598905 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.18s
2026-03-23 00:47:19.598909 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.51s
2026-03-23 00:47:19.598913 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.38s
2026-03-23 00:47:19.598917 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.73s
2026-03-23 00:47:19.598920 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.48s
2026-03-23 00:47:19.598924 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.52s
2026-03-23 00:47:19.598928 | orchestrator |
2026-03-23 00:47:19.598932 | orchestrator |
2026-03-23 00:47:19.598935 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-23 00:47:19.598939 | orchestrator |
2026-03-23 00:47:19.598943 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-23 00:47:19.598954 | orchestrator | Monday 23 March 2026 00:45:50 +0000 (0:00:00.511) 0:00:00.511 **********
2026-03-23 00:47:19.598958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-23 00:47:19.598965 | orchestrator |
2026-03-23 00:47:19.598971 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-23 00:47:19.598977 | orchestrator | Monday 23 March 2026 00:45:50 +0000 (0:00:00.419) 0:00:00.931 **********
2026-03-23 00:47:19.598983 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-23 00:47:19.598989 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-23 00:47:19.598995 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-23 00:47:19.599002 | orchestrator |
2026-03-23 00:47:19.599008 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-23 00:47:19.599015 | orchestrator | Monday 23 March 2026 00:45:53 +0000 (0:00:03.394) 0:00:04.326 **********
2026-03-23 00:47:19.599022 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.599028 | orchestrator |
2026-03-23 00:47:19.599034 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-23 00:47:19.599041 | orchestrator | Monday 23 March 2026 00:45:56 +0000 (0:00:02.184) 0:00:06.510 **********
2026-03-23 00:47:19.599057 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-23 00:47:19.599065 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.599071 | orchestrator |
2026-03-23 00:47:19.599077 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-23 00:47:19.599084 | orchestrator | Monday 23 March 2026 00:46:30 +0000 (0:00:34.466) 0:00:40.976 **********
2026-03-23 00:47:19.599090 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.599097 | orchestrator |
2026-03-23 00:47:19.599103 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-23 00:47:19.599110 | orchestrator | Monday 23 March 2026 00:46:31 +0000 (0:00:01.071) 0:00:42.048 **********
2026-03-23 00:47:19.599114 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.599119 | orchestrator |
2026-03-23 00:47:19.599126 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-23 00:47:19.599136 | orchestrator | Monday 23 March 2026 00:46:32 +0000 (0:00:00.931) 0:00:42.980 **********
2026-03-23 00:47:19.599142 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.599148 | orchestrator |
2026-03-23 00:47:19.599154 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-23 00:47:19.599161 | orchestrator | Monday 23 March 2026 00:46:34 +0000 (0:00:02.018) 0:00:44.998 **********
2026-03-23 00:47:19.599167 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.599174 | orchestrator |
2026-03-23 00:47:19.599180 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-23 00:47:19.599187 | orchestrator | Monday 23 March 2026 00:46:35 +0000 (0:00:00.793) 0:00:45.791 **********
2026-03-23 00:47:19.599193 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.599200 | orchestrator |
2026-03-23 00:47:19.599207 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-23 00:47:19.599213 | orchestrator | Monday 23 March 2026 00:46:36 +0000 (0:00:00.967) 0:00:46.759 **********
2026-03-23 00:47:19.599220 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.599226 | orchestrator |
2026-03-23 00:47:19.599230 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:47:19.599236 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:47:19.599244 | orchestrator |
2026-03-23 00:47:19.599249 | orchestrator |
2026-03-23 00:47:19.599256 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:47:19.599264 | orchestrator | Monday 23 March 2026 00:46:36 +0000 (0:00:00.543) 0:00:47.303 **********
2026-03-23 00:47:19.599268 | orchestrator | ===============================================================================
2026-03-23 00:47:19.599273 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.47s
2026-03-23 00:47:19.599277 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.39s
2026-03-23 00:47:19.599281 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.18s
2026-03-23 00:47:19.599286 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.02s
2026-03-23 00:47:19.599290 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.07s
2026-03-23 00:47:19.599294 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.97s
2026-03-23 00:47:19.599298 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.93s
2026-03-23 00:47:19.599302 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.79s
2026-03-23 00:47:19.599307 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.55s
2026-03-23 00:47:19.599311 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.42s
2026-03-23 00:47:19.599315 | orchestrator |
2026-03-23 00:47:19.599320 | orchestrator |
2026-03-23 00:47:19.599326 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 00:47:19.599336 | orchestrator |
2026-03-23 00:47:19.599342 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 00:47:19.599348 | orchestrator | Monday 23 March 2026 00:45:49 +0000 (0:00:00.614) 0:00:00.614 **********
2026-03-23 00:47:19.599353 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-23 00:47:19.599359 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-23 00:47:19.599365 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-23 00:47:19.599371 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-23 00:47:19.599376 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-23 00:47:19.599382 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-23 00:47:19.599388 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-23 00:47:19.599394 | orchestrator |
2026-03-23 00:47:19.599401 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-23 00:47:19.599407 | orchestrator |
2026-03-23 00:47:19.599412 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-23 00:47:19.599418 | orchestrator | Monday 23 March 2026 00:45:51 +0000 (0:00:01.401) 0:00:02.015 **********
2026-03-23 00:47:19.599435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:47:19.599443 | orchestrator |
2026-03-23 00:47:19.599449 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-23 00:47:19.599459 | orchestrator | Monday 23 March 2026 00:45:52 +0000 (0:00:01.632) 0:00:03.648 **********
2026-03-23 00:47:19.599469 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:47:19.599475 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:47:19.599482 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.599488 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:47:19.599494 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:47:19.599505 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:47:19.599512 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:47:19.599518 | orchestrator |
2026-03-23 00:47:19.599524 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-23 00:47:19.599530 | orchestrator | Monday 23 March 2026 00:45:54 +0000 (0:00:01.998) 0:00:05.647 **********
2026-03-23 00:47:19.599536 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:47:19.599543 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:47:19.599554 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:47:19.599561 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:47:19.599568 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:47:19.599575 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:47:19.599581 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.599587 | orchestrator |
2026-03-23 00:47:19.599593 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-23 00:47:19.599600 | orchestrator | Monday 23 March 2026 00:45:58 +0000 (0:00:04.114) 0:00:09.764 **********
2026-03-23 00:47:19.599604 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:47:19.599609 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:47:19.599613 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.599617 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:47:19.599621 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:47:19.599625 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:47:19.599630 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:47:19.599634 | orchestrator |
2026-03-23 00:47:19.599638 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-23 00:47:19.599643 | orchestrator | Monday 23 March 2026 00:46:01 +0000 (0:00:02.221) 0:00:11.986 **********
2026-03-23 00:47:19.599647 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:47:19.599651 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:47:19.599655 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:47:19.599660 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:47:19.599664 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.599668 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:47:19.599672 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:47:19.599676 | orchestrator |
2026-03-23 00:47:19.599681 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-23 00:47:19.599685 | orchestrator | Monday 23 March 2026 00:46:11 +0000 (0:00:09.985) 0:00:21.971 **********
2026-03-23 00:47:19.599690 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:47:19.599694 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:47:19.599698 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:47:19.599703 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:47:19.599707 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:47:19.599711 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:47:19.599715 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.599719 | orchestrator |
2026-03-23 00:47:19.599723 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-23 00:47:19.599756 | orchestrator | Monday 23 March 2026 00:46:51 +0000 (0:00:39.967) 0:01:01.939 **********
2026-03-23 00:47:19.599762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:47:19.599767 | orchestrator |
2026-03-23 00:47:19.599773 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-23 00:47:19.599779 | orchestrator | Monday 23 March 2026 00:46:52 +0000 (0:00:01.562) 0:01:03.501 **********
2026-03-23 00:47:19.599789 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-23 00:47:19.599797 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-23 00:47:19.599803 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-23 00:47:19.599809 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-23 00:47:19.599816 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-23 00:47:19.599822 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-23 00:47:19.599828 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-23 00:47:19.599835 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-23 00:47:19.599840 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-23 00:47:19.599847 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-23 00:47:19.599858 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-23 00:47:19.599864 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-23 00:47:19.599871 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-23 00:47:19.599878 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-23 00:47:19.599885 | orchestrator |
2026-03-23 00:47:19.599892 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-23 00:47:19.599899 | orchestrator | Monday 23 March 2026 00:46:57 +0000 (0:00:04.503) 0:01:08.004 **********
2026-03-23 00:47:19.599906 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.599912 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:47:19.599919 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:47:19.599925 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:47:19.599931 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:47:19.599938 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:47:19.599945 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:47:19.599951 | orchestrator |
2026-03-23 00:47:19.599990 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-23 00:47:19.599996 | orchestrator | Monday 23 March 2026 00:46:58 +0000 (0:00:01.214) 0:01:09.218 **********
2026-03-23 00:47:19.600002 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:47:19.600009 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.600015 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:47:19.600021 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:47:19.600028 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:47:19.600034 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:47:19.600041 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:47:19.600047 | orchestrator |
2026-03-23 00:47:19.600054 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-23 00:47:19.600066 | orchestrator | Monday 23 March 2026 00:46:59 +0000 (0:00:01.211) 0:01:10.430 **********
2026-03-23 00:47:19.600073 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.600079 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:47:19.600087 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:47:19.600094 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:47:19.600100 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:47:19.600107 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:47:19.600113 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:47:19.600119 | orchestrator |
2026-03-23 00:47:19.600126 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-23 00:47:19.600132 | orchestrator | Monday 23 March 2026 00:47:01 +0000 (0:00:01.521) 0:01:11.951 **********
2026-03-23 00:47:19.600139 | orchestrator | ok: [testbed-manager]
2026-03-23 00:47:19.600145 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:47:19.600151 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:47:19.600157 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:47:19.600164 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:47:19.600170 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:47:19.600176 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:47:19.600183 | orchestrator |
2026-03-23 00:47:19.600209 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-23 00:47:19.600217 | orchestrator | Monday 23 March 2026 00:47:03 +0000 (0:00:02.077) 0:01:14.029 **********
2026-03-23 00:47:19.600224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-23 00:47:19.600232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:47:19.600239 | orchestrator |
2026-03-23 00:47:19.600245 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-23 00:47:19.600252 | orchestrator | Monday 23 March 2026 00:47:04 +0000 (0:00:01.375) 0:01:15.405 **********
2026-03-23 00:47:19.600258 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.600269 | orchestrator |
2026-03-23 00:47:19.600276 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-23 00:47:19.600282 | orchestrator | Monday 23 March 2026 00:47:06 +0000 (0:00:01.945) 0:01:17.350 **********
2026-03-23 00:47:19.600289 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:47:19.600295 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:47:19.600302 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:47:19.600308 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:47:19.600315 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:47:19.600321 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:47:19.600328 | orchestrator | changed: [testbed-manager]
2026-03-23 00:47:19.600334 | orchestrator |
2026-03-23 00:47:19.600340 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:47:19.600347 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:47:19.600354 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:47:19.600384 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:47:19.600392 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:47:19.600399 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:47:19.600405 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:47:19.600412 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:47:19.600418 | orchestrator |
2026-03-23 00:47:19.600425 | orchestrator |
2026-03-23 00:47:19.600431 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:47:19.600438 | orchestrator | Monday 23 March 2026 00:47:17 +0000 (0:00:11.385) 0:01:28.736 **********
2026-03-23 00:47:19.600444 | orchestrator | ===============================================================================
2026-03-23 00:47:19.600451 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.97s
2026-03-23 00:47:19.600457 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.39s
2026-03-23 00:47:19.600463 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.99s
2026-03-23 00:47:19.600470 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.50s
2026-03-23 00:47:19.600476 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.11s
2026-03-23 00:47:19.600483 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.22s
2026-03-23 00:47:19.600489 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.08s
2026-03-23 00:47:19.600496 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.00s
2026-03-23 00:47:19.600502 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.95s
2026-03-23 00:47:19.600508 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.63s
2026-03-23 00:47:19.600515 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.56s
2026-03-23 00:47:19.600526 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.52s
2026-03-23 00:47:19.600533 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.40s
2026-03-23 00:47:19.600539 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.38s
2026-03-23 00:47:19.600546 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.21s
2026-03-23 00:47:19.600557 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.21s
2026-03-23 00:47:19.600564 | orchestrator | 2026-03-23 00:47:19 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:47:19.600571 | orchestrator | 2026-03-23 00:47:19 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:47:22.649549 | orchestrator | 2026-03-23 00:47:22 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state STARTED
2026-03-23 00:47:22.652245 | orchestrator | 2026-03-23 00:47:22 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:47:22.654039 | orchestrator | 2026-03-23 00:47:22 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:47:22.655768 | orchestrator | 2026-03-23 00:47:22 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:47:22.655807 | orchestrator | 2026-03-23 00:47:22 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:47:25.703605 | orchestrator | 2026-03-23 00:47:25 | INFO  | Task f8c10f33-a5e7-4678-b1da-ce8fa3e4ef87 is in state SUCCESS
2026-03-23 00:47:25.705567 | orchestrator | 2026-03-23 00:47:25 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED
2026-03-23 00:47:25.706868 | orchestrator | 2026-03-23 00:47:25 | INFO  
| Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:47:25.708574 | orchestrator | 2026-03-23 00:47:25 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:47:25.708595 | orchestrator | 2026-03-23 00:47:25 | INFO  | Wait 1 second(s) until the next check [identical polling cycles from 00:47:28 through 00:48:08 elided: tasks f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9, d8c6fd32-9743-4cb4-ada5-c9a6c634576b and 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca remained in state STARTED, re-checked roughly every 3 seconds] 2026-03-23 00:48:11.534438 | orchestrator | 2026-03-23 00:48:11 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:48:11.536630 | orchestrator | 2026-03-23 00:48:11 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:48:11.538307 | orchestrator | 2026-03-23 00:48:11 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state
STARTED 2026-03-23 00:48:11.538548 | orchestrator | 2026-03-23 00:48:11 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:48:14.578636 | orchestrator | 2026-03-23 00:48:14 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:48:14.579908 | orchestrator | 2026-03-23 00:48:14 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:48:14.581251 | orchestrator | 2026-03-23 00:48:14 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:48:14.581404 | orchestrator | 2026-03-23 00:48:14 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:48:17.627005 | orchestrator | 2026-03-23 00:48:17 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state STARTED 2026-03-23 00:48:17.629836 | orchestrator | 2026-03-23 00:48:17 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:48:17.632141 | orchestrator | 2026-03-23 00:48:17 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:48:17.632191 | orchestrator | 2026-03-23 00:48:17 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:48:20.671849 | orchestrator | 2026-03-23 00:48:20.671896 | orchestrator | 2026-03-23 00:48:20.671901 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-03-23 00:48:20.671905 | orchestrator | 2026-03-23 00:48:20.671908 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-03-23 00:48:20.671911 | orchestrator | Monday 23 March 2026 00:46:08 +0000 (0:00:00.337) 0:00:00.337 ********** 2026-03-23 00:48:20.671915 | orchestrator | ok: [testbed-manager] 2026-03-23 00:48:20.671919 | orchestrator | 2026-03-23 00:48:20.671922 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-03-23 00:48:20.671925 | orchestrator | Monday 23 March 2026 00:46:10 +0000 (0:00:02.092) 0:00:02.430 
********** 2026-03-23 00:48:20.671936 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-03-23 00:48:20.671940 | orchestrator | 2026-03-23 00:48:20.671943 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-03-23 00:48:20.671946 | orchestrator | Monday 23 March 2026 00:46:11 +0000 (0:00:00.728) 0:00:03.158 ********** 2026-03-23 00:48:20.671949 | orchestrator | changed: [testbed-manager] 2026-03-23 00:48:20.671953 | orchestrator | 2026-03-23 00:48:20.671956 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-03-23 00:48:20.671959 | orchestrator | Monday 23 March 2026 00:46:12 +0000 (0:00:01.753) 0:00:04.911 ********** 2026-03-23 00:48:20.671962 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-03-23 00:48:20.671965 | orchestrator | ok: [testbed-manager] 2026-03-23 00:48:20.671968 | orchestrator | 2026-03-23 00:48:20.671971 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-03-23 00:48:20.671975 | orchestrator | Monday 23 March 2026 00:47:07 +0000 (0:00:54.274) 0:00:59.186 ********** 2026-03-23 00:48:20.671978 | orchestrator | changed: [testbed-manager] 2026-03-23 00:48:20.671981 | orchestrator | 2026-03-23 00:48:20.671984 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:48:20.671987 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:48:20.671991 | orchestrator | 2026-03-23 00:48:20.671994 | orchestrator | 2026-03-23 00:48:20.671997 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:48:20.672049 | orchestrator | Monday 23 March 2026 00:47:23 +0000 (0:00:16.573) 0:01:15.759 ********** 2026-03-23 00:48:20.672053 | orchestrator | 
=============================================================================== 2026-03-23 00:48:20.672056 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 54.27s 2026-03-23 00:48:20.672059 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 16.57s 2026-03-23 00:48:20.672062 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.09s 2026-03-23 00:48:20.672065 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.75s 2026-03-23 00:48:20.672068 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.73s 2026-03-23 00:48:20.672071 | orchestrator | 2026-03-23 00:48:20.672074 | orchestrator | 2026-03-23 00:48:20.672077 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-23 00:48:20.672080 | orchestrator | 2026-03-23 00:48:20.672083 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-23 00:48:20.672086 | orchestrator | Monday 23 March 2026 00:45:43 +0000 (0:00:00.290) 0:00:00.290 ********** 2026-03-23 00:48:20.672090 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:48:20.672093 | orchestrator | 2026-03-23 00:48:20.672096 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-23 00:48:20.672099 | orchestrator | Monday 23 March 2026 00:45:44 +0000 (0:00:01.097) 0:00:01.387 ********** 2026-03-23 00:48:20.672102 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-23 00:48:20.672105 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-23 00:48:20.672108 | orchestrator | changed: [testbed-node-1] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-03-23 00:48:20.672112 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-23 00:48:20.672115 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-23 00:48:20.672118 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-23 00:48:20.672122 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-23 00:48:20.672128 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-23 00:48:20.672133 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-23 00:48:20.672137 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-23 00:48:20.672140 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-23 00:48:20.672143 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-23 00:48:20.672146 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-23 00:48:20.672174 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-23 00:48:20.672178 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-23 00:48:20.672181 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-23 00:48:20.672191 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-23 00:48:20.672195 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-23 00:48:20.672198 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2026-03-23 00:48:20.672201 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-23 00:48:20.672204 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-23 00:48:20.672207 | orchestrator | 2026-03-23 00:48:20.672210 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-23 00:48:20.672213 | orchestrator | Monday 23 March 2026 00:45:49 +0000 (0:00:04.251) 0:00:05.639 ********** 2026-03-23 00:48:20.672216 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:48:20.672220 | orchestrator | 2026-03-23 00:48:20.672227 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-23 00:48:20.672230 | orchestrator | Monday 23 March 2026 00:45:50 +0000 (0:00:01.300) 0:00:06.939 ********** 2026-03-23 00:48:20.672235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672244 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672266 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672269 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672306 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672329 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-23 00:48:20.672334 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672363 | orchestrator | 2026-03-23 00:48:20.672369 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-23 00:48:20.672374 | orchestrator | Monday 23 March 2026 00:45:56 +0000 (0:00:06.379) 0:00:13.319 ********** 2026-03-23 00:48:20.672387 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672393 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672399 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672404 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:48:20.672409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672424 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672452 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:48:20.672458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672472 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:48:20.672476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672488 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:48:20.672492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672506 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:48:20.672509 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:48:20.672513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  
2026-03-23 00:48:20.672519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672526 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:48:20.672530 | orchestrator | 2026-03-23 00:48:20.672534 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-23 00:48:20.672537 | orchestrator | Monday 23 March 2026 00:45:58 +0000 (0:00:01.865) 0:00:15.185 ********** 2026-03-23 00:48:20.672541 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2026-03-23 00:48:20.672547 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672553 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672556 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:48:20.672560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20 | INFO  | Task f32e74bf-4b6d-4f7a-9e5e-51be5b0c33e9 is in state SUCCESS 2026-03-23 00:48:20.672836 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672840 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:48:20.672843 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:48:20.672846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672856 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:48:20.672859 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:48:20.672862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 
00:48:20.672879 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:48:20.672884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-23 00:48:20.672887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.672894 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:48:20.672897 | orchestrator | 2026-03-23 00:48:20.672900 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-23 00:48:20.672903 | orchestrator | 
Monday 23 March 2026 00:46:01 +0000 (0:00:03.132) 0:00:18.317 ********** 2026-03-23 00:48:20.672906 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:48:20.672909 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:48:20.672913 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:48:20.672916 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:48:20.672919 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:48:20.672922 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:48:20.672925 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:48:20.672928 | orchestrator | 2026-03-23 00:48:20.672931 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-23 00:48:20.672934 | orchestrator | Monday 23 March 2026 00:46:04 +0000 (0:00:02.148) 0:00:20.465 ********** 2026-03-23 00:48:20.672937 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:48:20.672940 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:48:20.672943 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:48:20.672946 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:48:20.672950 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:48:20.672953 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:48:20.672956 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:48:20.672959 | orchestrator | 2026-03-23 00:48:20.672962 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-23 00:48:20.672965 | orchestrator | Monday 23 March 2026 00:46:05 +0000 (0:00:01.747) 0:00:22.213 ********** 2026-03-23 00:48:20.672968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672984 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.672990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.672997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673010 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673017 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673036 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673053 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673057 | orchestrator | 2026-03-23 00:48:20.673060 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-23 00:48:20.673063 | orchestrator | Monday 23 March 2026 00:46:12 +0000 (0:00:06.629) 0:00:28.843 ********** 2026-03-23 00:48:20.673066 | orchestrator | [WARNING]: Skipped 2026-03-23 00:48:20.673070 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-23 00:48:20.673073 | orchestrator | to this access issue: 2026-03-23 00:48:20.673085 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-23 00:48:20.673089 | orchestrator | directory 2026-03-23 00:48:20.673092 
| orchestrator | ok: [testbed-manager -> localhost] 2026-03-23 00:48:20.673095 | orchestrator | 2026-03-23 00:48:20.673098 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-23 00:48:20.673101 | orchestrator | Monday 23 March 2026 00:46:13 +0000 (0:00:01.296) 0:00:30.139 ********** 2026-03-23 00:48:20.673104 | orchestrator | [WARNING]: Skipped 2026-03-23 00:48:20.673108 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-23 00:48:20.673111 | orchestrator | to this access issue: 2026-03-23 00:48:20.673114 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-23 00:48:20.673117 | orchestrator | directory 2026-03-23 00:48:20.673120 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-23 00:48:20.673123 | orchestrator | 2026-03-23 00:48:20.673126 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-23 00:48:20.673132 | orchestrator | Monday 23 March 2026 00:46:14 +0000 (0:00:01.276) 0:00:31.415 ********** 2026-03-23 00:48:20.673135 | orchestrator | [WARNING]: Skipped 2026-03-23 00:48:20.673138 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-23 00:48:20.673141 | orchestrator | to this access issue: 2026-03-23 00:48:20.673144 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-23 00:48:20.673147 | orchestrator | directory 2026-03-23 00:48:20.673150 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-23 00:48:20.673153 | orchestrator | 2026-03-23 00:48:20.673156 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-23 00:48:20.673159 | orchestrator | Monday 23 March 2026 00:46:16 +0000 (0:00:01.300) 0:00:32.716 ********** 2026-03-23 00:48:20.673162 | orchestrator | [WARNING]: Skipped 2026-03-23 
00:48:20.673166 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-23 00:48:20.673169 | orchestrator | to this access issue: 2026-03-23 00:48:20.673172 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-23 00:48:20.673175 | orchestrator | directory 2026-03-23 00:48:20.673178 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-23 00:48:20.673181 | orchestrator | 2026-03-23 00:48:20.673184 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-23 00:48:20.673189 | orchestrator | Monday 23 March 2026 00:46:17 +0000 (0:00:01.152) 0:00:33.869 ********** 2026-03-23 00:48:20.673192 | orchestrator | changed: [testbed-manager] 2026-03-23 00:48:20.673195 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:48:20.673201 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:48:20.673206 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:48:20.673211 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:48:20.673216 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:48:20.673221 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:48:20.673226 | orchestrator | 2026-03-23 00:48:20.673230 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-23 00:48:20.673235 | orchestrator | Monday 23 March 2026 00:46:21 +0000 (0:00:04.121) 0:00:37.990 ********** 2026-03-23 00:48:20.673239 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-23 00:48:20.673244 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-23 00:48:20.673248 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-23 00:48:20.673255 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-23 00:48:20.673259 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-23 00:48:20.673264 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-23 00:48:20.673270 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-23 00:48:20.673275 | orchestrator | 2026-03-23 00:48:20.673279 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-23 00:48:20.673282 | orchestrator | Monday 23 March 2026 00:46:24 +0000 (0:00:02.936) 0:00:40.927 ********** 2026-03-23 00:48:20.673285 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:48:20.673288 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:48:20.673291 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:48:20.673294 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:48:20.673297 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:48:20.673300 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:48:20.673303 | orchestrator | changed: [testbed-manager] 2026-03-23 00:48:20.673306 | orchestrator | 2026-03-23 00:48:20.673309 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-23 00:48:20.673315 | orchestrator | Monday 23 March 2026 00:46:28 +0000 (0:00:03.940) 0:00:44.868 ********** 2026-03-23 00:48:20.673318 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.673325 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.673333 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673339 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673343 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673346 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.673351 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.673358 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.673366 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.673376 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:48:20.673383 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673386 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673389 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673393 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673397 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673401 | orchestrator | 2026-03-23 00:48:20.673406 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-23 00:48:20.673411 | orchestrator | Monday 23 March 2026 00:46:31 +0000 (0:00:02.914) 0:00:47.783 ********** 2026-03-23 00:48:20.673416 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-23 00:48:20.673421 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-23 00:48:20.673427 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-23 00:48:20.673433 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-23 00:48:20.673439 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-23 00:48:20.673442 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-23 00:48:20.673445 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-23 00:48:20.673448 | orchestrator | 2026-03-23 00:48:20.673451 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-23 00:48:20.673454 | orchestrator | Monday 23 March 2026 00:46:34 +0000 (0:00:02.912) 0:00:50.695 ********** 2026-03-23 00:48:20.673458 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-23 00:48:20.673461 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-23 00:48:20.673465 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-23 00:48:20.673468 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-23 00:48:20.673472 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-23 00:48:20.673475 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-23 00:48:20.673478 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-23 00:48:20.673482 | orchestrator | 2026-03-23 00:48:20.673485 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-23 00:48:20.673489 | orchestrator | Monday 23 March 2026 00:46:37 +0000 (0:00:02.835) 0:00:53.531 ********** 2026-03-23 00:48:20.673492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673496 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673516 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673524 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:48:20.673528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-23 00:48:20.673536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673549 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673580 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-23 00:48:20.673583 | orchestrator |
2026-03-23 00:48:20.673587 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-23 00:48:20.673591 | orchestrator | Monday 23 March 2026 00:46:40 +0000 (0:00:03.034) 0:00:56.565 **********
2026-03-23 00:48:20.673594 | orchestrator | changed: [testbed-manager]
2026-03-23 00:48:20.673597 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:48:20.673600 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:48:20.673603 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:48:20.673606 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:48:20.673609 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:48:20.673612 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:48:20.673615 | orchestrator |
2026-03-23 00:48:20.673618 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-23 00:48:20.673621 | orchestrator | Monday 23 March 2026 00:46:41 +0000 (0:00:01.645) 0:00:58.211 **********
2026-03-23 00:48:20.673624 | orchestrator | changed: [testbed-manager]
2026-03-23 00:48:20.673629 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:48:20.673634 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:48:20.673640 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:48:20.673647 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:48:20.673652 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:48:20.673656 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:48:20.673661 | orchestrator |
2026-03-23 00:48:20.673666 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-23 00:48:20.673671 | orchestrator | Monday 23 March 2026 00:46:43 +0000 (0:00:01.354) 0:00:59.565 **********
2026-03-23 00:48:20.673675 | orchestrator |
2026-03-23 00:48:20.673680 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-23 00:48:20.673685 | orchestrator | Monday 23 March 2026 00:46:43 +0000 (0:00:00.077) 0:00:59.643 **********
2026-03-23 00:48:20.673690 | orchestrator |
2026-03-23 00:48:20.673695 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-23 00:48:20.673700 | orchestrator | Monday 23 March 2026 00:46:43 +0000 (0:00:00.107) 0:00:59.750 **********
2026-03-23 00:48:20.673705 | orchestrator |
2026-03-23 00:48:20.673710 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-23 00:48:20.673715 | orchestrator | Monday 23 March 2026 00:46:43 +0000 (0:00:00.090) 0:00:59.841 **********
2026-03-23 00:48:20.673733 | orchestrator |
2026-03-23 00:48:20.673737 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-23 00:48:20.673740 | orchestrator | Monday 23 March 2026 00:46:43 +0000 (0:00:00.066) 0:00:59.908 **********
2026-03-23 00:48:20.673743 | orchestrator |
2026-03-23 00:48:20.673746 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-23 00:48:20.673749 | orchestrator | Monday 23 March 2026 00:46:43 +0000 (0:00:00.060) 0:00:59.969 **********
2026-03-23 00:48:20.673752 | orchestrator |
2026-03-23 00:48:20.673755 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-23 00:48:20.673758 | orchestrator | Monday 23 March 2026 00:46:43 +0000 (0:00:00.059) 0:01:00.028 **********
2026-03-23 00:48:20.673765 | orchestrator |
2026-03-23 00:48:20.673768 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-23 00:48:20.673771 | orchestrator | Monday 23 March 2026 00:46:43 +0000 (0:00:00.085) 0:01:00.113 **********
2026-03-23 00:48:20.673774 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:48:20.673777 | orchestrator | changed: [testbed-manager]
2026-03-23 00:48:20.673780 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:48:20.673783 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:48:20.673786 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:48:20.673789 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:48:20.673792 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:48:20.673795 | orchestrator |
2026-03-23 00:48:20.673799 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-23 00:48:20.673802 | orchestrator | Monday 23 March 2026 00:47:11 +0000 (0:00:27.764) 0:01:27.877 **********
2026-03-23 00:48:20.673815 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:48:20.673818 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:48:20.673821 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:48:20.673824 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:48:20.673827 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:48:20.673830 | orchestrator | changed: [testbed-manager]
2026-03-23 00:48:20.673833 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:48:20.673836 | orchestrator |
2026-03-23 00:48:20.673839 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-23 00:48:20.673843 | orchestrator | Monday 23 March 2026 00:48:07 +0000 (0:00:55.683) 0:02:23.561 **********
2026-03-23 00:48:20.673846 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:48:20.673849 | orchestrator | ok: [testbed-manager]
2026-03-23 00:48:20.673852 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:48:20.673855 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:48:20.673858 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:48:20.673861 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:48:20.673864 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:48:20.673867 | orchestrator |
2026-03-23 00:48:20.673870 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-23 00:48:20.673873 | orchestrator | Monday 23 March 2026 00:48:09 +0000 (0:00:02.118) 0:02:25.680 **********
2026-03-23 00:48:20.673876 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:48:20.673879 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:48:20.673882 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:48:20.673885 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:48:20.673888 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:48:20.673891 | orchestrator | changed: [testbed-manager]
2026-03-23 00:48:20.673894 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:48:20.673897 | orchestrator |
2026-03-23 00:48:20.673900 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:48:20.673904 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-23 00:48:20.673907 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-23 00:48:20.673913 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-23 00:48:20.673916 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-23 00:48:20.673919 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-23 00:48:20.673922 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-23 00:48:20.673928 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-23 00:48:20.673931 | orchestrator |
2026-03-23 00:48:20.673934 | orchestrator |
2026-03-23 00:48:20.673937 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:48:20.673940 | orchestrator | Monday 23 March 2026 00:48:18 +0000 (0:00:09.612) 0:02:35.293 **********
2026-03-23 00:48:20.673946 | orchestrator | ===============================================================================
2026-03-23 00:48:20.673956 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 55.68s
2026-03-23 00:48:20.673960 | orchestrator | common : Restart fluentd container ------------------------------------- 27.76s
2026-03-23 00:48:20.673967 | orchestrator | common : Restart cron container ----------------------------------------- 9.61s
2026-03-23 00:48:20.673970 | orchestrator | common : Copying over config.json files for services -------------------- 6.63s
2026-03-23 00:48:20.673973 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.38s
2026-03-23 00:48:20.673976 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.25s
2026-03-23 00:48:20.673979 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.12s
2026-03-23 00:48:20.673982 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.94s
2026-03-23 00:48:20.673985 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.13s
2026-03-23 00:48:20.673989 | orchestrator | common : Check common containers ---------------------------------------- 3.03s
2026-03-23 00:48:20.673992 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.94s
2026-03-23 00:48:20.673995 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.91s
2026-03-23 00:48:20.673998 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.91s
2026-03-23 00:48:20.674001 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.84s
2026-03-23 00:48:20.674004 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.15s
2026-03-23 00:48:20.674007 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.12s
2026-03-23 00:48:20.674010 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.87s
2026-03-23 00:48:20.674046 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.75s
2026-03-23 00:48:20.674050 | orchestrator | common : Creating log volume -------------------------------------------- 1.65s
2026-03-23 00:48:20.674053 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.35s
2026-03-23 00:48:20.674056 | orchestrator | 2026-03-23 00:48:20 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:20.674060 | orchestrator | 2026-03-23 00:48:20 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:20.674257 | orchestrator | 2026-03-23 00:48:20 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:20.674292 | orchestrator | 2026-03-23 00:48:20 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:20.674639 | orchestrator | 2026-03-23 00:48:20 | INFO  | Task 6a54fa33-67a6-49e1-b05a-980142dfe458 is in state STARTED
2026-03-23 00:48:20.675583 | orchestrator | 2026-03-23 00:48:20 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:20.675612 | orchestrator | 2026-03-23 00:48:20 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:48:23.706292 | orchestrator | 2026-03-23 00:48:23 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:23.707203 | orchestrator | 2026-03-23 00:48:23 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:23.708095 | orchestrator | 2026-03-23 00:48:23 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:23.708890 | orchestrator | 2026-03-23 00:48:23 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:23.710547 | orchestrator | 2026-03-23 00:48:23 | INFO  | Task 6a54fa33-67a6-49e1-b05a-980142dfe458 is in state STARTED
2026-03-23 00:48:23.711349 | orchestrator | 2026-03-23 00:48:23 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:23.711478 | orchestrator | 2026-03-23 00:48:23 | INFO  | Wait 1
second(s) until the next check
2026-03-23 00:48:26.739164 | orchestrator | 2026-03-23 00:48:26 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:26.739611 | orchestrator | 2026-03-23 00:48:26 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:26.740094 | orchestrator | 2026-03-23 00:48:26 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:26.740671 | orchestrator | 2026-03-23 00:48:26 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:26.741400 | orchestrator | 2026-03-23 00:48:26 | INFO  | Task 6a54fa33-67a6-49e1-b05a-980142dfe458 is in state STARTED
2026-03-23 00:48:26.741945 | orchestrator | 2026-03-23 00:48:26 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:26.741966 | orchestrator | 2026-03-23 00:48:26 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:48:29.769332 | orchestrator | 2026-03-23 00:48:29 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:29.769420 | orchestrator | 2026-03-23 00:48:29 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:29.770601 | orchestrator | 2026-03-23 00:48:29 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:29.771851 | orchestrator | 2026-03-23 00:48:29 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:29.772547 | orchestrator | 2026-03-23 00:48:29 | INFO  | Task 6a54fa33-67a6-49e1-b05a-980142dfe458 is in state STARTED
2026-03-23 00:48:29.773479 | orchestrator | 2026-03-23 00:48:29 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:29.773507 | orchestrator | 2026-03-23 00:48:29 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:48:32.799568 | orchestrator | 2026-03-23 00:48:32 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:32.800985 | orchestrator | 2026-03-23 00:48:32 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:32.801429 | orchestrator | 2026-03-23 00:48:32 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:32.802284 | orchestrator | 2026-03-23 00:48:32 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:32.804456 | orchestrator | 2026-03-23 00:48:32 | INFO  | Task 6a54fa33-67a6-49e1-b05a-980142dfe458 is in state STARTED
2026-03-23 00:48:32.804992 | orchestrator | 2026-03-23 00:48:32 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:32.805012 | orchestrator | 2026-03-23 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:48:35.834128 | orchestrator | 2026-03-23 00:48:35 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:35.835325 | orchestrator | 2026-03-23 00:48:35 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:35.836183 | orchestrator | 2026-03-23 00:48:35 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:35.838005 | orchestrator | 2026-03-23 00:48:35 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:35.838281 | orchestrator | 2026-03-23 00:48:35 | INFO  | Task 6a54fa33-67a6-49e1-b05a-980142dfe458 is in state STARTED
2026-03-23 00:48:35.839800 | orchestrator | 2026-03-23 00:48:35 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:35.839899 | orchestrator | 2026-03-23 00:48:35 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:48:38.862225 | orchestrator | 2026-03-23 00:48:38 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:38.862909 | orchestrator | 2026-03-23 00:48:38 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:38.863412 | orchestrator | 2026-03-23 00:48:38 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:38.864123 | orchestrator | 2026-03-23 00:48:38 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:38.864644 | orchestrator | 2026-03-23 00:48:38 | INFO  | Task 6a54fa33-67a6-49e1-b05a-980142dfe458 is in state SUCCESS
2026-03-23 00:48:38.865382 | orchestrator | 2026-03-23 00:48:38 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:48:38.866191 | orchestrator | 2026-03-23 00:48:38 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:38.866376 | orchestrator | 2026-03-23 00:48:38 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:48:41.912158 | orchestrator | 2026-03-23 00:48:41 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:41.912242 | orchestrator | 2026-03-23 00:48:41 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:41.912640 | orchestrator | 2026-03-23 00:48:41 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:41.917645 | orchestrator | 2026-03-23 00:48:41 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:41.918307 | orchestrator | 2026-03-23 00:48:41 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:48:41.918914 | orchestrator | 2026-03-23 00:48:41 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:41.918939 | orchestrator | 2026-03-23 00:48:41 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:48:44.967634 | orchestrator | 2026-03-23 00:48:44 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:44.968982 | orchestrator | 2026-03-23 00:48:44 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:44.972647 | orchestrator | 2026-03-23 00:48:44 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:44.972707 | orchestrator | 2026-03-23 00:48:44 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:44.974285 | orchestrator | 2026-03-23 00:48:44 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:48:44.974331 | orchestrator | 2026-03-23 00:48:44 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:44.974338 | orchestrator | 2026-03-23 00:48:44 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:48:48.015508 | orchestrator | 2026-03-23 00:48:48 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state STARTED
2026-03-23 00:48:48.027351 | orchestrator | 2026-03-23 00:48:48 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:48:48.028945 | orchestrator | 2026-03-23 00:48:48 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED
2026-03-23 00:48:48.030346 | orchestrator | 2026-03-23 00:48:48 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:48:48.031084 | orchestrator | 2026-03-23 00:48:48 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:48:48.031986 | orchestrator | 2026-03-23 00:48:48 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:48:48.032019 | orchestrator | 2026-03-23 00:48:48 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:48:51.080798 | orchestrator |
2026-03-23 00:48:51.080851 | orchestrator |
2026-03-23 00:48:51.080856 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 00:48:51.080860 | orchestrator |
2026-03-23 00:48:51.080864 | orchestrator | TASK [Group hosts based on Kolla action]
***************************************
2026-03-23 00:48:51.080870 | orchestrator | Monday 23 March 2026 00:48:22 +0000 (0:00:00.253) 0:00:00.253 **********
2026-03-23 00:48:51.080875 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:48:51.080882 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:48:51.080887 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:48:51.080892 | orchestrator |
2026-03-23 00:48:51.080899 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 00:48:51.080906 | orchestrator | Monday 23 March 2026 00:48:22 +0000 (0:00:00.341) 0:00:00.595 **********
2026-03-23 00:48:51.080912 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-23 00:48:51.080917 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-23 00:48:51.080932 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-23 00:48:51.080938 | orchestrator |
2026-03-23 00:48:51.080944 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-23 00:48:51.080949 | orchestrator |
2026-03-23 00:48:51.080954 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-23 00:48:51.080960 | orchestrator | Monday 23 March 2026 00:48:22 +0000 (0:00:00.372) 0:00:00.967 **********
2026-03-23 00:48:51.080965 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:48:51.080971 | orchestrator |
2026-03-23 00:48:51.080977 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-23 00:48:51.080982 | orchestrator | Monday 23 March 2026 00:48:23 +0000 (0:00:00.775) 0:00:01.742 **********
2026-03-23 00:48:51.080987 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-23 00:48:51.081023 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-23 00:48:51.081031 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-23 00:48:51.081038 | orchestrator |
2026-03-23 00:48:51.081046 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-23 00:48:51.081054 | orchestrator | Monday 23 March 2026 00:48:25 +0000 (0:00:01.583) 0:00:03.325 **********
2026-03-23 00:48:51.081061 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-23 00:48:51.081069 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-23 00:48:51.081076 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-23 00:48:51.081084 | orchestrator |
2026-03-23 00:48:51.081093 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-23 00:48:51.081099 | orchestrator | Monday 23 March 2026 00:48:26 +0000 (0:00:01.696) 0:00:05.022 **********
2026-03-23 00:48:51.081106 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:48:51.081115 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:48:51.081121 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:48:51.081128 | orchestrator |
2026-03-23 00:48:51.081136 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-23 00:48:51.081158 | orchestrator | Monday 23 March 2026 00:48:28 +0000 (0:00:01.649) 0:00:06.672 **********
2026-03-23 00:48:51.081167 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:48:51.081174 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:48:51.081181 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:48:51.081189 | orchestrator |
2026-03-23 00:48:51.081196 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:48:51.081205 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:48:51.081214 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:48:51.081222 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:48:51.081230 | orchestrator |
2026-03-23 00:48:51.081237 | orchestrator |
2026-03-23 00:48:51.081245 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:48:51.081251 | orchestrator | Monday 23 March 2026 00:48:36 +0000 (0:00:07.945) 0:00:14.617 **********
2026-03-23 00:48:51.081260 | orchestrator | ===============================================================================
2026-03-23 00:48:51.081268 | orchestrator | memcached : Restart memcached container --------------------------------- 7.95s
2026-03-23 00:48:51.081275 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.70s
2026-03-23 00:48:51.081284 | orchestrator | memcached : Check memcached container ----------------------------------- 1.65s
2026-03-23 00:48:51.081293 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.58s
2026-03-23 00:48:51.081301 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.78s
2026-03-23 00:48:51.081308 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s
2026-03-23 00:48:51.081315 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-03-23 00:48:51.081323 | orchestrator |
2026-03-23 00:48:51.081330 | orchestrator |
2026-03-23 00:48:51.081339 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 00:48:51.081358 | orchestrator |
2026-03-23 00:48:51.081368 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-23 00:48:51.081373 | orchestrator | Monday 23 March 2026 00:48:22 +0000 (0:00:00.370) 0:00:00.370 **********
2026-03-23 00:48:51.081378 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:48:51.081385 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:48:51.081391 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:48:51.081396 | orchestrator |
2026-03-23 00:48:51.081402 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 00:48:51.081422 | orchestrator | Monday 23 March 2026 00:48:23 +0000 (0:00:00.532) 0:00:00.902 **********
2026-03-23 00:48:51.081429 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-23 00:48:51.081437 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-23 00:48:51.081444 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-23 00:48:51.081451 | orchestrator |
2026-03-23 00:48:51.081459 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-23 00:48:51.081465 | orchestrator |
2026-03-23 00:48:51.081473 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-23 00:48:51.081480 | orchestrator | Monday 23 March 2026 00:48:23 +0000 (0:00:00.352) 0:00:01.254 **********
2026-03-23 00:48:51.081487 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:48:51.081494 | orchestrator |
2026-03-23 00:48:51.081502 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-23 00:48:51.081515 | orchestrator | Monday 23 March 2026 00:48:23 +0000 (0:00:00.523) 0:00:01.778 **********
2026-03-23 00:48:51.081532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081585 | orchestrator |
2026-03-23 00:48:51.081590 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-23 00:48:51.081595 | orchestrator | Monday 23 March 2026 00:48:26 +0000 (0:00:02.442) 0:00:04.220 **********
2026-03-23 00:48:51.081609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-23 00:48:51.081648 | orchestrator |
2026-03-23 00:48:51.081653 | orchestrator | TASK [redis : Copying over redis config files]
********************************* 2026-03-23 00:48:51.081662 | orchestrator | Monday 23 March 2026 00:48:28 +0000 (0:00:02.558) 0:00:06.778 ********** 2026-03-23 00:48:51.081671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081689 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081724 | orchestrator | 2026-03-23 00:48:51.081734 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-23 00:48:51.081743 | orchestrator | Monday 23 March 2026 00:48:31 +0000 (0:00:02.304) 0:00:09.083 ********** 2026-03-23 00:48:51.081749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-23 00:48:51.081785 | orchestrator | 2026-03-23 00:48:51.081794 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-23 00:48:51.081799 | orchestrator | Monday 23 March 2026 00:48:32 +0000 (0:00:01.538) 0:00:10.622 ********** 2026-03-23 00:48:51.081804 | orchestrator | 2026-03-23 00:48:51.081809 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-23 00:48:51.081817 | orchestrator | Monday 23 March 2026 00:48:32 +0000 (0:00:00.187) 0:00:10.810 ********** 2026-03-23 00:48:51.081823 | orchestrator | 2026-03-23 00:48:51.081828 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-23 00:48:51.081833 | orchestrator | Monday 23 March 2026 00:48:33 +0000 (0:00:00.128) 0:00:10.938 ********** 2026-03-23 00:48:51.081838 | orchestrator | 2026-03-23 00:48:51.081844 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-23 00:48:51.081849 | orchestrator | Monday 23 March 2026 00:48:33 +0000 (0:00:00.113) 0:00:11.052 ********** 2026-03-23 00:48:51.081854 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:48:51.081859 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:48:51.081864 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:48:51.081870 | orchestrator | 2026-03-23 00:48:51.081875 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-23 00:48:51.081880 | orchestrator | Monday 23 March 2026 00:48:40 +0000 (0:00:07.672) 0:00:18.725 ********** 2026-03-23 
00:48:51.081889 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:48:51.081895 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:48:51.081900 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:48:51.081906 | orchestrator | 2026-03-23 00:48:51.081911 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:48:51.081916 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:48:51.081922 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:48:51.081927 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:48:51.081933 | orchestrator | 2026-03-23 00:48:51.081938 | orchestrator | 2026-03-23 00:48:51.081943 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:48:51.081948 | orchestrator | Monday 23 March 2026 00:48:48 +0000 (0:00:07.504) 0:00:26.229 ********** 2026-03-23 00:48:51.081954 | orchestrator | =============================================================================== 2026-03-23 00:48:51.081959 | orchestrator | redis : Restart redis container ----------------------------------------- 7.67s 2026-03-23 00:48:51.081964 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.50s 2026-03-23 00:48:51.081969 | orchestrator | redis : Copying over default config.json files -------------------------- 2.56s 2026-03-23 00:48:51.081975 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.44s 2026-03-23 00:48:51.081980 | orchestrator | redis : Copying over redis config files --------------------------------- 2.31s 2026-03-23 00:48:51.081985 | orchestrator | redis : Check redis containers ------------------------------------------ 1.54s 2026-03-23 00:48:51.081990 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s 2026-03-23 00:48:51.081995 | orchestrator | redis : include_tasks --------------------------------------------------- 0.52s 2026-03-23 00:48:51.082001 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.43s 2026-03-23 00:48:51.082006 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2026-03-23 00:48:51.082047 | orchestrator | 2026-03-23 00:48:51 | INFO  | Task da92febf-fb64-4611-93f1-1ba764c583ce is in state SUCCESS 2026-03-23 00:48:51.082054 | orchestrator | 2026-03-23 00:48:51 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:48:51.082060 | orchestrator | 2026-03-23 00:48:51 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:48:51.082138 | orchestrator | 2026-03-23 00:48:51 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:48:51.085957 | orchestrator | 2026-03-23 00:48:51 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:48:51.085990 | orchestrator | 2026-03-23 00:48:51 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:48:51.085997 | orchestrator | 2026-03-23 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:48:54.123282 | orchestrator | 2026-03-23 00:48:54 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:48:54.124267 | orchestrator | 2026-03-23 00:48:54 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:48:54.124921 | orchestrator | 2026-03-23 00:48:54 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:48:54.125610 | orchestrator | 2026-03-23 00:48:54 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:48:54.126334 | orchestrator | 2026-03-23 
00:48:54 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:48:54.126414 | orchestrator | 2026-03-23 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:48:57.313908 | orchestrator | 2026-03-23 00:48:57 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:48:57.316025 | orchestrator | 2026-03-23 00:48:57 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:48:57.316055 | orchestrator | 2026-03-23 00:48:57 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:48:57.316060 | orchestrator | 2026-03-23 00:48:57 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:48:57.316435 | orchestrator | 2026-03-23 00:48:57 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:48:57.316442 | orchestrator | 2026-03-23 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:49:00.393299 | orchestrator | 2026-03-23 00:49:00 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:49:00.394139 | orchestrator | 2026-03-23 00:49:00 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:49:00.396343 | orchestrator | 2026-03-23 00:49:00 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:49:00.398670 | orchestrator | 2026-03-23 00:49:00 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:49:00.400144 | orchestrator | 2026-03-23 00:49:00 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:49:00.400181 | orchestrator | 2026-03-23 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:49:03.481575 | orchestrator | 2026-03-23 00:49:03 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:49:03.484776 | orchestrator | 2026-03-23 00:49:03 | INFO  | Task 
b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:49:03.486063 | orchestrator | 2026-03-23 00:49:03 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:49:03.487991 | orchestrator | 2026-03-23 00:49:03 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:49:03.489871 | orchestrator | 2026-03-23 00:49:03 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:49:03.489909 | orchestrator | 2026-03-23 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:49:06.518206 | orchestrator | 2026-03-23 00:49:06 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:49:06.519292 | orchestrator | 2026-03-23 00:49:06 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:49:06.520015 | orchestrator | 2026-03-23 00:49:06 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:49:06.521172 | orchestrator | 2026-03-23 00:49:06 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:49:06.521865 | orchestrator | 2026-03-23 00:49:06 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:49:06.523138 | orchestrator | 2026-03-23 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:49:09.550325 | orchestrator | 2026-03-23 00:49:09 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:49:09.551760 | orchestrator | 2026-03-23 00:49:09 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:49:09.553638 | orchestrator | 2026-03-23 00:49:09 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:49:09.554677 | orchestrator | 2026-03-23 00:49:09 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:49:09.556441 | orchestrator | 2026-03-23 00:49:09 | INFO  | Task 
3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:49:09.556489 | orchestrator | 2026-03-23 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:49:12.582345 | orchestrator | 2026-03-23 00:49:12 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:49:12.582534 | orchestrator | 2026-03-23 00:49:12 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:49:12.583301 | orchestrator | 2026-03-23 00:49:12 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:49:12.584001 | orchestrator | 2026-03-23 00:49:12 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:49:12.584745 | orchestrator | 2026-03-23 00:49:12 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:49:12.584765 | orchestrator | 2026-03-23 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:49:15.615832 | orchestrator | 2026-03-23 00:49:15 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:49:15.616000 | orchestrator | 2026-03-23 00:49:15 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:49:15.618549 | orchestrator | 2026-03-23 00:49:15 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:49:15.619311 | orchestrator | 2026-03-23 00:49:15 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:49:15.620081 | orchestrator | 2026-03-23 00:49:15 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:49:15.620108 | orchestrator | 2026-03-23 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:49:18.649639 | orchestrator | 2026-03-23 00:49:18 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:49:18.649878 | orchestrator | 2026-03-23 00:49:18 | INFO  | Task 
b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:49:18.650385 | orchestrator | 2026-03-23 00:49:18 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:49:18.651222 | orchestrator | 2026-03-23 00:49:18 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:49:18.651995 | orchestrator | 2026-03-23 00:49:18 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:49:18.652217 | orchestrator | 2026-03-23 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:49:21.677465 | orchestrator | 2026-03-23 00:49:21 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:49:21.677645 | orchestrator | 2026-03-23 00:49:21 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state STARTED 2026-03-23 00:49:21.678551 | orchestrator | 2026-03-23 00:49:21 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:49:21.679072 | orchestrator | 2026-03-23 00:49:21 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED 2026-03-23 00:49:21.680163 | orchestrator | 2026-03-23 00:49:21 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:49:21.680192 | orchestrator | 2026-03-23 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:49:24.713122 | orchestrator | 2026-03-23 00:49:24 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED 2026-03-23 00:49:24.714155 | orchestrator | 2026-03-23 00:49:24 | INFO  | Task b92f8747-96aa-4361-9d4b-777bbf000f09 is in state SUCCESS 2026-03-23 00:49:24.715259 | orchestrator | 2026-03-23 00:49:24.715288 | orchestrator | 2026-03-23 00:49:24.715294 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 00:49:24.715298 | orchestrator | 2026-03-23 00:49:24.715302 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-03-23 00:49:24.715306 | orchestrator | Monday 23 March 2026 00:48:22 +0000 (0:00:00.475) 0:00:00.476 ********** 2026-03-23 00:49:24.715310 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:49:24.715314 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:49:24.715318 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:49:24.715322 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:49:24.715325 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:49:24.715329 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:49:24.715332 | orchestrator | 2026-03-23 00:49:24.715337 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 00:49:24.715340 | orchestrator | Monday 23 March 2026 00:48:23 +0000 (0:00:00.816) 0:00:01.292 ********** 2026-03-23 00:49:24.715344 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-23 00:49:24.715356 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-23 00:49:24.715360 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-23 00:49:24.715367 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-23 00:49:24.715371 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-23 00:49:24.715375 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-23 00:49:24.715378 | orchestrator | 2026-03-23 00:49:24.715382 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-23 00:49:24.715386 | orchestrator | 2026-03-23 00:49:24.715389 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-23 00:49:24.715393 | orchestrator | Monday 23 March 2026 00:48:24 +0000 (0:00:01.271) 
0:00:02.564 ********** 2026-03-23 00:49:24.715397 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:49:24.715402 | orchestrator | 2026-03-23 00:49:24.715405 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-23 00:49:24.715409 | orchestrator | Monday 23 March 2026 00:48:26 +0000 (0:00:01.293) 0:00:03.857 ********** 2026-03-23 00:49:24.715423 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-23 00:49:24.715427 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-23 00:49:24.715430 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-23 00:49:24.715434 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-23 00:49:24.715438 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-23 00:49:24.715441 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-23 00:49:24.715445 | orchestrator | 2026-03-23 00:49:24.715449 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-23 00:49:24.715452 | orchestrator | Monday 23 March 2026 00:48:27 +0000 (0:00:01.607) 0:00:05.464 ********** 2026-03-23 00:49:24.715456 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-23 00:49:24.715460 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-23 00:49:24.715463 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-23 00:49:24.715467 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-23 00:49:24.715470 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-23 00:49:24.715474 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-23 00:49:24.715478 | orchestrator | 2026-03-23 00:49:24.715484 | orchestrator | TASK 
[module-load : Drop module persistence] *********************************** 2026-03-23 00:49:24.715488 | orchestrator | Monday 23 March 2026 00:48:29 +0000 (0:00:01.592) 0:00:07.057 ********** 2026-03-23 00:49:24.715531 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-23 00:49:24.715536 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:49:24.715540 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-23 00:49:24.715544 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:49:24.715548 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-23 00:49:24.715551 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:49:24.715555 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-23 00:49:24.715558 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:49:24.715562 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-23 00:49:24.715566 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:49:24.715570 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-23 00:49:24.715573 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:49:24.715577 | orchestrator | 2026-03-23 00:49:24.715581 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-23 00:49:24.715584 | orchestrator | Monday 23 March 2026 00:48:30 +0000 (0:00:01.038) 0:00:08.096 ********** 2026-03-23 00:49:24.715588 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:49:24.715592 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:49:24.715595 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:49:24.715599 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:49:24.715603 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:49:24.715606 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:49:24.715610 | orchestrator | 2026-03-23 00:49:24.715614 | orchestrator | TASK 
[openvswitch : Ensuring config directories exist] ************************* 2026-03-23 00:49:24.715617 | orchestrator | Monday 23 March 2026 00:48:30 +0000 (0:00:00.622) 0:00:08.718 ********** 2026-03-23 00:49:24.715631 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715659 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715714 | orchestrator | 2026-03-23 00:49:24.715719 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-23 00:49:24.715731 | orchestrator | Monday 23 March 2026 00:48:32 +0000 (0:00:01.638) 0:00:10.357 ********** 2026-03-23 00:49:24.715737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715743 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715846 | orchestrator | 2026-03-23 00:49:24.715852 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-23 00:49:24.715859 | orchestrator | Monday 23 March 2026 00:48:35 +0000 (0:00:02.595) 0:00:12.952 ********** 2026-03-23 00:49:24.715863 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:49:24.715867 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:49:24.715871 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:49:24.715876 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:49:24.715880 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:49:24.715884 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:49:24.715888 | orchestrator | 2026-03-23 00:49:24.715892 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-23 00:49:24.715896 | orchestrator | Monday 23 March 2026 00:48:36 +0000 (0:00:00.910) 0:00:13.863 ********** 2026-03-23 00:49:24.715901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715905 
| orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.715988 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.716002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.716009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-23 00:49:24.716015 | orchestrator | 2026-03-23 00:49:24.716021 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-23 00:49:24.716028 | orchestrator | Monday 23 March 2026 00:48:38 +0000 (0:00:02.117) 0:00:15.980 ********** 2026-03-23 00:49:24.716034 | orchestrator | 2026-03-23 00:49:24.716039 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-23 00:49:24.716043 | orchestrator | Monday 23 March 2026 00:48:38 +0000 (0:00:00.128) 0:00:16.108 ********** 2026-03-23 00:49:24.716047 | orchestrator | 2026-03-23 00:49:24.716050 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-23 00:49:24.716054 | orchestrator | Monday 23 March 2026 00:48:38 +0000 (0:00:00.125) 0:00:16.234 ********** 2026-03-23 00:49:24.716057 | orchestrator | 2026-03-23 00:49:24.716061 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-23 00:49:24.716065 | orchestrator | Monday 23 March 2026 00:48:38 +0000 (0:00:00.340) 0:00:16.574 ********** 2026-03-23 00:49:24.716068 | orchestrator | 2026-03-23 00:49:24.716072 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-23 00:49:24.716076 | orchestrator | Monday 23 March 2026 00:48:39 +0000 (0:00:00.265) 0:00:16.840 ********** 2026-03-23 00:49:24.716079 | orchestrator | 2026-03-23 00:49:24.716083 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-23 00:49:24.716087 | orchestrator | Monday 23 March 2026 00:48:39 +0000 (0:00:00.164) 0:00:17.004 ********** 2026-03-23 00:49:24.716090 | orchestrator | 2026-03-23 00:49:24.716094 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-23 
00:49:24.716098 | orchestrator | Monday 23 March 2026 00:48:39 +0000 (0:00:00.119) 0:00:17.123 ********** 2026-03-23 00:49:24.716101 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:49:24.716105 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:49:24.716109 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:49:24.716116 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:49:24.716120 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:49:24.716123 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:49:24.716127 | orchestrator | 2026-03-23 00:49:24.716134 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-23 00:49:24.716138 | orchestrator | Monday 23 March 2026 00:48:48 +0000 (0:00:09.547) 0:00:26.671 ********** 2026-03-23 00:49:24.716141 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:49:24.716145 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:49:24.716148 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:49:24.716152 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:49:24.716186 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:49:24.716190 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:49:24.716194 | orchestrator | 2026-03-23 00:49:24.716205 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-23 00:49:24.716208 | orchestrator | Monday 23 March 2026 00:48:51 +0000 (0:00:02.392) 0:00:29.064 ********** 2026-03-23 00:49:24.716212 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:49:24.716216 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:49:24.716219 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:49:24.716223 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:49:24.716227 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:49:24.716234 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:49:24.716238 | orchestrator | 2026-03-23 00:49:24.716242 | orchestrator | 
TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-23 00:49:24.716245 | orchestrator | Monday 23 March 2026 00:49:01 +0000 (0:00:10.436) 0:00:39.500 ********** 2026-03-23 00:49:24.716249 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-23 00:49:24.716253 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-23 00:49:24.716257 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-23 00:49:24.716260 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-23 00:49:24.716264 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-23 00:49:24.716271 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-23 00:49:24.716275 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-23 00:49:24.716278 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-23 00:49:24.716282 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-23 00:49:24.716285 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-23 00:49:24.716289 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-23 00:49:24.716293 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 
'testbed-node-1'})
2026-03-23 00:49:24.716296 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-23 00:49:24.716300 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-23 00:49:24.716304 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-23 00:49:24.716307 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-23 00:49:24.716315 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-23 00:49:24.716318 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-23 00:49:24.716322 | orchestrator |
2026-03-23 00:49:24.716326 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-23 00:49:24.716330 | orchestrator | Monday 23 March 2026 00:49:08 +0000 (0:00:06.838) 0:00:46.338 **********
2026-03-23 00:49:24.716333 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-23 00:49:24.716337 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:49:24.716341 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-23 00:49:24.716344 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:49:24.716348 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-23 00:49:24.716352 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:49:24.716355 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-23 00:49:24.716359 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-23 00:49:24.716362 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-23 00:49:24.716366 | orchestrator |
2026-03-23 00:49:24.716370 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-23 00:49:24.716373 | orchestrator | Monday 23 March 2026 00:49:11 +0000 (0:00:02.895) 0:00:49.235 **********
2026-03-23 00:49:24.716377 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-23 00:49:24.716381 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:49:24.716384 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-23 00:49:24.716388 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:49:24.716392 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-23 00:49:24.716397 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:49:24.716401 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-23 00:49:24.716405 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-23 00:49:24.716408 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-23 00:49:24.716412 | orchestrator |
2026-03-23 00:49:24.716416 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-23 00:49:24.716419 | orchestrator | Monday 23 March 2026 00:49:15 +0000 (0:00:04.058) 0:00:53.293 **********
2026-03-23 00:49:24.716423 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:49:24.716427 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:49:24.716430 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:49:24.716434 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:49:24.716437 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:49:24.716441 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:49:24.716445 | orchestrator |
2026-03-23 00:49:24.716449 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:49:24.716453 | orchestrator |
testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-23 00:49:24.716457 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-23 00:49:24.716460 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-23 00:49:24.716464 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-23 00:49:24.716468 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-23 00:49:24.716474 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-23 00:49:24.716480 | orchestrator |
2026-03-23 00:49:24.716484 | orchestrator |
2026-03-23 00:49:24.716488 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:49:24.716491 | orchestrator | Monday 23 March 2026 00:49:23 +0000 (0:00:07.744) 0:01:01.038 **********
2026-03-23 00:49:24.716495 | orchestrator | ===============================================================================
2026-03-23 00:49:24.716499 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.18s
2026-03-23 00:49:24.716502 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.55s
2026-03-23 00:49:24.716506 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.84s
2026-03-23 00:49:24.716510 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.06s
2026-03-23 00:49:24.716513 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.90s
2026-03-23 00:49:24.716517 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.60s
2026-03-23 00:49:24.716520 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.39s
2026-03-23 00:49:24.716524 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.12s
2026-03-23 00:49:24.716528 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.64s
2026-03-23 00:49:24.716531 | orchestrator | module-load : Load modules ---------------------------------------------- 1.61s
2026-03-23 00:49:24.716535 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.59s
2026-03-23 00:49:24.716538 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.29s
2026-03-23 00:49:24.716543 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.27s
2026-03-23 00:49:24.716547 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.14s
2026-03-23 00:49:24.716550 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.04s
2026-03-23 00:49:24.716554 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.91s
2026-03-23 00:49:24.716557 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s
2026-03-23 00:49:24.716561 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.62s
2026-03-23 00:49:24.716565 | orchestrator | 2026-03-23 00:49:24 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:24.716612 | orchestrator | 2026-03-23 00:49:24 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:24.717528 | orchestrator | 2026-03-23 00:49:24 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:24.721359 | orchestrator | 2026-03-23 00:49:24 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23
00:49:24.721405 | orchestrator | 2026-03-23 00:49:24 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:27.756076 | orchestrator | 2026-03-23 00:49:27 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:27.756437 | orchestrator | 2026-03-23 00:49:27 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:27.757336 | orchestrator | 2026-03-23 00:49:27 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:27.761271 | orchestrator | 2026-03-23 00:49:27 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:27.763814 | orchestrator | 2026-03-23 00:49:27 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:27.763849 | orchestrator | 2026-03-23 00:49:27 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:30.785643 | orchestrator | 2026-03-23 00:49:30 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:30.788005 | orchestrator | 2026-03-23 00:49:30 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:30.788055 | orchestrator | 2026-03-23 00:49:30 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:30.788065 | orchestrator | 2026-03-23 00:49:30 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:30.788106 | orchestrator | 2026-03-23 00:49:30 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:30.788146 | orchestrator | 2026-03-23 00:49:30 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:33.811065 | orchestrator | 2026-03-23 00:49:33 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:33.811524 | orchestrator | 2026-03-23 00:49:33 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:33.812263 | orchestrator | 2026-03-23 00:49:33 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:33.812878 | orchestrator | 2026-03-23 00:49:33 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:33.813505 | orchestrator | 2026-03-23 00:49:33 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:33.813585 | orchestrator | 2026-03-23 00:49:33 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:36.843487 | orchestrator | 2026-03-23 00:49:36 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:36.843545 | orchestrator | 2026-03-23 00:49:36 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:36.843555 | orchestrator | 2026-03-23 00:49:36 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:36.843562 | orchestrator | 2026-03-23 00:49:36 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:36.843569 | orchestrator | 2026-03-23 00:49:36 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:36.843575 | orchestrator | 2026-03-23 00:49:36 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:39.874341 | orchestrator | 2026-03-23 00:49:39 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:39.874786 | orchestrator | 2026-03-23 00:49:39 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:39.875364 | orchestrator | 2026-03-23 00:49:39 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:39.876035 | orchestrator | 2026-03-23 00:49:39 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:39.876699 | orchestrator | 2026-03-23 00:49:39 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:39.876746 | orchestrator |
2026-03-23 00:49:39 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:42.911225 | orchestrator | 2026-03-23 00:49:42 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:42.912960 | orchestrator | 2026-03-23 00:49:42 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:42.917205 | orchestrator | 2026-03-23 00:49:42 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:42.918487 | orchestrator | 2026-03-23 00:49:42 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:42.919256 | orchestrator | 2026-03-23 00:49:42 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:42.919309 | orchestrator | 2026-03-23 00:49:42 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:45.943928 | orchestrator | 2026-03-23 00:49:45 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:45.945341 | orchestrator | 2026-03-23 00:49:45 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:45.945769 | orchestrator | 2026-03-23 00:49:45 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:45.947190 | orchestrator | 2026-03-23 00:49:45 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:45.947897 | orchestrator | 2026-03-23 00:49:45 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:45.947924 | orchestrator | 2026-03-23 00:49:45 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:48.999509 | orchestrator | 2026-03-23 00:49:48 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:48.999568 | orchestrator | 2026-03-23 00:49:48 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:48.999945 | orchestrator | 2026-03-23 00:49:48 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:49.000575 | orchestrator | 2026-03-23 00:49:49 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:49.001308 | orchestrator | 2026-03-23 00:49:49 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:49.001330 | orchestrator | 2026-03-23 00:49:49 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:52.039144 | orchestrator | 2026-03-23 00:49:52 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:52.041207 | orchestrator | 2026-03-23 00:49:52 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:52.042171 | orchestrator | 2026-03-23 00:49:52 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:52.043613 | orchestrator | 2026-03-23 00:49:52 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:52.045069 | orchestrator | 2026-03-23 00:49:52 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:52.045103 | orchestrator | 2026-03-23 00:49:52 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:49:55.085066 | orchestrator | 2026-03-23 00:49:55 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:55.085125 | orchestrator | 2026-03-23 00:49:55 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:55.085154 | orchestrator | 2026-03-23 00:49:55 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:55.085161 | orchestrator | 2026-03-23 00:49:55 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:55.085219 | orchestrator | 2026-03-23 00:49:55 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:55.085252 | orchestrator | 2026-03-23 00:49:55 | INFO  | Wait 1
second(s) until the next check
2026-03-23 00:49:58.109999 | orchestrator | 2026-03-23 00:49:58 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:49:58.110487 | orchestrator | 2026-03-23 00:49:58 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:49:58.111174 | orchestrator | 2026-03-23 00:49:58 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:49:58.112178 | orchestrator | 2026-03-23 00:49:58 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:49:58.112984 | orchestrator | 2026-03-23 00:49:58 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:49:58.113029 | orchestrator | 2026-03-23 00:49:58 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:01.147865 | orchestrator | 2026-03-23 00:50:01 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:50:01.151760 | orchestrator | 2026-03-23 00:50:01 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:01.152384 | orchestrator | 2026-03-23 00:50:01 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:01.153575 | orchestrator | 2026-03-23 00:50:01 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:01.154643 | orchestrator | 2026-03-23 00:50:01 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:01.154706 | orchestrator | 2026-03-23 00:50:01 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:04.226993 | orchestrator | 2026-03-23 00:50:04 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:50:04.230077 | orchestrator | 2026-03-23 00:50:04 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:04.230866 | orchestrator | 2026-03-23 00:50:04 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:04.231467 | orchestrator | 2026-03-23 00:50:04 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:04.231935 | orchestrator | 2026-03-23 00:50:04 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:04.231964 | orchestrator | 2026-03-23 00:50:04 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:07.278946 | orchestrator | 2026-03-23 00:50:07 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:50:07.281067 | orchestrator | 2026-03-23 00:50:07 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:07.282643 | orchestrator | 2026-03-23 00:50:07 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:07.286439 | orchestrator | 2026-03-23 00:50:07 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:07.288418 | orchestrator | 2026-03-23 00:50:07 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:07.288460 | orchestrator | 2026-03-23 00:50:07 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:10.337529 | orchestrator | 2026-03-23 00:50:10 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:50:10.339029 | orchestrator | 2026-03-23 00:50:10 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:10.339833 | orchestrator | 2026-03-23 00:50:10 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:10.341050 | orchestrator | 2026-03-23 00:50:10 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:10.342562 | orchestrator | 2026-03-23 00:50:10 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:10.342676 | orchestrator | 2026-03-23 00:50:10 | INFO  | Wait 1
second(s) until the next check
2026-03-23 00:50:13.429347 | orchestrator | 2026-03-23 00:50:13 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state STARTED
2026-03-23 00:50:13.429423 | orchestrator | 2026-03-23 00:50:13 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:13.429430 | orchestrator | 2026-03-23 00:50:13 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:13.429436 | orchestrator | 2026-03-23 00:50:13 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:13.429441 | orchestrator | 2026-03-23 00:50:13 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:13.429446 | orchestrator | 2026-03-23 00:50:13 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:16.655396 | orchestrator |
2026-03-23 00:50:16.655452 | orchestrator |
2026-03-23 00:50:16.655464 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-23 00:50:16.655473 | orchestrator |
2026-03-23 00:50:16.655482 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-23 00:50:16.655491 | orchestrator | Monday 23 March 2026 00:45:44 +0000 (0:00:00.202) 0:00:00.202 **********
2026-03-23 00:50:16.655500 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:50:16.655509 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:50:16.655518 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:50:16.655527 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:50:16.655535 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:50:16.655544 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:50:16.655556 | orchestrator |
2026-03-23 00:50:16.655565 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-23 00:50:16.655575 | orchestrator | Monday 23 March 2026 00:45:45 +0000 (0:00:00.497) 0:00:00.700 **********
2026-03-23 00:50:16.655584 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.655594 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.655602 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.655611 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.655620 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.655629 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.655636 | orchestrator |
2026-03-23 00:50:16.655641 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-23 00:50:16.655647 | orchestrator | Monday 23 March 2026 00:45:45 +0000 (0:00:00.609) 0:00:01.310 **********
2026-03-23 00:50:16.655667 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.655674 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.655679 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.655684 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.655689 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.655694 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.655699 | orchestrator |
2026-03-23 00:50:16.655704 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-23 00:50:16.655719 | orchestrator | Monday 23 March 2026 00:45:46 +0000 (0:00:00.467) 0:00:01.777 **********
2026-03-23 00:50:16.655724 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:50:16.655729 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:50:16.655734 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:50:16.655739 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:50:16.655744 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:50:16.655749 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:50:16.655754 | orchestrator |
2026-03-23 00:50:16.655759 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding]
*************************************
2026-03-23 00:50:16.655765 | orchestrator | Monday 23 March 2026 00:45:48 +0000 (0:00:02.361) 0:00:04.138 **********
2026-03-23 00:50:16.655770 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:50:16.655775 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:50:16.655780 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:50:16.655785 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:50:16.655790 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:50:16.655795 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:50:16.655811 | orchestrator |
2026-03-23 00:50:16.655816 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-23 00:50:16.655821 | orchestrator | Monday 23 March 2026 00:45:50 +0000 (0:00:01.558) 0:00:05.697 **********
2026-03-23 00:50:16.655826 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:50:16.655831 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:50:16.655836 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:50:16.655841 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:50:16.655846 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:50:16.655851 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:50:16.655857 | orchestrator |
2026-03-23 00:50:16.655862 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-23 00:50:16.655867 | orchestrator | Monday 23 March 2026 00:45:51 +0000 (0:00:01.663) 0:00:07.361 **********
2026-03-23 00:50:16.655872 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.655877 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.655882 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.655887 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.655892 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.655897 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.655902 | orchestrator |
2026-03-23 00:50:16.655907 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-23 00:50:16.655912 | orchestrator | Monday 23 March 2026 00:45:52 +0000 (0:00:01.247) 0:00:08.609 **********
2026-03-23 00:50:16.655917 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.655922 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.655927 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.655932 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.655937 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.655942 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.655947 | orchestrator |
2026-03-23 00:50:16.655953 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-23 00:50:16.655962 | orchestrator | Monday 23 March 2026 00:45:53 +0000 (0:00:01.022) 0:00:09.631 **********
2026-03-23 00:50:16.655971 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-23 00:50:16.655980 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-23 00:50:16.655990 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.655999 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-23 00:50:16.656008 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-23 00:50:16.656018 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.656028 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-23 00:50:16.656036 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-23 00:50:16.656045 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-23 00:50:16.656055 |
orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-23 00:50:16.656072 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.656078 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-23 00:50:16.656084 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-23 00:50:16.656090 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.656096 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.656102 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-23 00:50:16.656108 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-23 00:50:16.656113 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.656119 | orchestrator |
2026-03-23 00:50:16.656125 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-23 00:50:16.656136 | orchestrator | Monday 23 March 2026 00:45:54 +0000 (0:00:00.801) 0:00:10.434 **********
2026-03-23 00:50:16.656141 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.656147 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.656153 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.656159 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.656165 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.656171 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.656177 | orchestrator |
2026-03-23 00:50:16.656183 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-23 00:50:16.656189 | orchestrator | Monday 23 March 2026 00:45:56 +0000 (0:00:01.377) 0:00:11.812 **********
2026-03-23 00:50:16.656195 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:50:16.656200 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:50:16.656206 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:50:16.656212 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:50:16.656217 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:50:16.656223 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:50:16.656229 | orchestrator |
2026-03-23 00:50:16.656238 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-23 00:50:16.656244 | orchestrator | Monday 23 March 2026 00:45:56 +0000 (0:00:00.750) 0:00:12.563 **********
2026-03-23 00:50:16.656250 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:50:16.656256 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:50:16.656262 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:50:16.656268 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:50:16.656274 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:50:16.656279 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:50:16.656285 | orchestrator |
2026-03-23 00:50:16.656291 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-23 00:50:16.656297 | orchestrator | Monday 23 March 2026 00:46:02 +0000 (0:00:05.567) 0:00:18.130 **********
2026-03-23 00:50:16.656302 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.656308 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.656313 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.656318 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.656323 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.656328 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.656333 | orchestrator |
2026-03-23 00:50:16.656338 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-23 00:50:16.656343 | orchestrator | Monday 23 March 2026 00:46:03 +0000 (0:00:01.352) 0:00:19.483 **********
2026-03-23
00:50:16.656348 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.656353 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.656358 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.656363 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.656368 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.656373 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.656378 | orchestrator |
2026-03-23 00:50:16.656383 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-23 00:50:16.656389 | orchestrator | Monday 23 March 2026 00:46:06 +0000 (0:00:02.552) 0:00:22.036 **********
2026-03-23 00:50:16.656394 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.656399 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.656404 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.656409 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.656414 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.656419 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.656424 | orchestrator |
2026-03-23 00:50:16.656429 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-23 00:50:16.656434 | orchestrator | Monday 23 March 2026 00:46:08 +0000 (0:00:01.605) 0:00:23.641 **********
2026-03-23 00:50:16.656442 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-23 00:50:16.656447 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-23 00:50:16.656453 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.656458 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-23 00:50:16.656463 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-23 00:50:16.656468 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.656472 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-23 00:50:16.656477 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-23 00:50:16.656482 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.656487 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-23 00:50:16.656492 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-23 00:50:16.656497 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.656502 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-23 00:50:16.656507 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-23 00:50:16.656512 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.656517 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-23 00:50:16.656522 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-23 00:50:16.656528 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.656534 | orchestrator |
2026-03-23 00:50:16.656543 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-23 00:50:16.656556 | orchestrator | Monday 23 March 2026 00:46:09 +0000 (0:00:01.071) 0:00:24.712 **********
2026-03-23 00:50:16.656565 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.656574 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.656582 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.656591 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.656600 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.656609 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.656618 | orchestrator |
2026-03-23 00:50:16.656626 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-23 00:50:16.656636 | orchestrator | Monday 23 March 2026 00:46:10 +0000
(0:00:01.001) 0:00:25.714 ********** 2026-03-23 00:50:16.656644 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:50:16.656677 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:50:16.656683 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:50:16.656688 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:16.656693 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.656698 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.656703 | orchestrator | 2026-03-23 00:50:16.656708 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-23 00:50:16.656713 | orchestrator | 2026-03-23 00:50:16.656718 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-23 00:50:16.656723 | orchestrator | Monday 23 March 2026 00:46:11 +0000 (0:00:01.320) 0:00:27.034 ********** 2026-03-23 00:50:16.656728 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.656733 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.656738 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.656743 | orchestrator | 2026-03-23 00:50:16.656748 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-23 00:50:16.656753 | orchestrator | Monday 23 March 2026 00:46:12 +0000 (0:00:01.035) 0:00:28.069 ********** 2026-03-23 00:50:16.656758 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.656763 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.656768 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.656773 | orchestrator | 2026-03-23 00:50:16.656782 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-23 00:50:16.656787 | orchestrator | Monday 23 March 2026 00:46:13 +0000 (0:00:01.466) 0:00:29.536 ********** 2026-03-23 00:50:16.656792 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.656802 | orchestrator | 
ok: [testbed-node-1] 2026-03-23 00:50:16.656807 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.656812 | orchestrator | 2026-03-23 00:50:16.656817 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-23 00:50:16.656822 | orchestrator | Monday 23 March 2026 00:46:15 +0000 (0:00:01.192) 0:00:30.728 ********** 2026-03-23 00:50:16.656827 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.656832 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.656837 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.656842 | orchestrator | 2026-03-23 00:50:16.656847 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-23 00:50:16.656852 | orchestrator | Monday 23 March 2026 00:46:16 +0000 (0:00:01.057) 0:00:31.786 ********** 2026-03-23 00:50:16.656857 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:16.656862 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.656867 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.656872 | orchestrator | 2026-03-23 00:50:16.656877 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-23 00:50:16.656882 | orchestrator | Monday 23 March 2026 00:46:16 +0000 (0:00:00.310) 0:00:32.096 ********** 2026-03-23 00:50:16.656887 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.656892 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.656897 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.656902 | orchestrator | 2026-03-23 00:50:16.656907 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-23 00:50:16.656912 | orchestrator | Monday 23 March 2026 00:46:17 +0000 (0:00:00.754) 0:00:32.850 ********** 2026-03-23 00:50:16.656917 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.656922 | orchestrator | changed: [testbed-node-0] 2026-03-23 
00:50:16.656927 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.656932 | orchestrator | 2026-03-23 00:50:16.656937 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-23 00:50:16.656942 | orchestrator | Monday 23 March 2026 00:46:19 +0000 (0:00:02.133) 0:00:34.984 ********** 2026-03-23 00:50:16.656947 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:50:16.656952 | orchestrator | 2026-03-23 00:50:16.656957 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-23 00:50:16.656963 | orchestrator | Monday 23 March 2026 00:46:20 +0000 (0:00:00.765) 0:00:35.749 ********** 2026-03-23 00:50:16.656968 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.656973 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.656978 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.656983 | orchestrator | 2026-03-23 00:50:16.656988 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-23 00:50:16.656993 | orchestrator | Monday 23 March 2026 00:46:21 +0000 (0:00:01.362) 0:00:37.112 ********** 2026-03-23 00:50:16.656998 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.657003 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.657008 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657013 | orchestrator | 2026-03-23 00:50:16.657018 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-23 00:50:16.657023 | orchestrator | Monday 23 March 2026 00:46:22 +0000 (0:00:01.082) 0:00:38.194 ********** 2026-03-23 00:50:16.657028 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.657033 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.657038 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657043 | 
orchestrator | 2026-03-23 00:50:16.657048 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-23 00:50:16.657053 | orchestrator | Monday 23 March 2026 00:46:23 +0000 (0:00:01.222) 0:00:39.417 ********** 2026-03-23 00:50:16.657058 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.657063 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.657068 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657077 | orchestrator | 2026-03-23 00:50:16.657082 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-23 00:50:16.657094 | orchestrator | Monday 23 March 2026 00:46:25 +0000 (0:00:01.477) 0:00:40.895 ********** 2026-03-23 00:50:16.657103 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:16.657111 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.657120 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.657128 | orchestrator | 2026-03-23 00:50:16.657137 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-23 00:50:16.657147 | orchestrator | Monday 23 March 2026 00:46:25 +0000 (0:00:00.612) 0:00:41.507 ********** 2026-03-23 00:50:16.657156 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:16.657164 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.657173 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.657180 | orchestrator | 2026-03-23 00:50:16.657186 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-23 00:50:16.657191 | orchestrator | Monday 23 March 2026 00:46:26 +0000 (0:00:00.446) 0:00:41.954 ********** 2026-03-23 00:50:16.657196 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657201 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.657206 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.657211 | 
orchestrator | 2026-03-23 00:50:16.657216 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-23 00:50:16.657221 | orchestrator | Monday 23 March 2026 00:46:28 +0000 (0:00:02.674) 0:00:44.629 ********** 2026-03-23 00:50:16.657226 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.657231 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.657236 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.657241 | orchestrator | 2026-03-23 00:50:16.657246 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-23 00:50:16.657251 | orchestrator | Monday 23 March 2026 00:46:30 +0000 (0:00:01.967) 0:00:46.596 ********** 2026-03-23 00:50:16.657256 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.657261 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.657266 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.657271 | orchestrator | 2026-03-23 00:50:16.657279 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-23 00:50:16.657284 | orchestrator | Monday 23 March 2026 00:46:31 +0000 (0:00:00.445) 0:00:47.042 ********** 2026-03-23 00:50:16.657290 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-23 00:50:16.657295 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-23 00:50:16.657300 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-23 00:50:16.657305 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-23 00:50:16.657310 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-23 00:50:16.657315 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-23 00:50:16.657320 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-23 00:50:16.657327 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-23 00:50:16.657336 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-23 00:50:16.657342 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-23 00:50:16.657354 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-23 00:50:16.657362 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-23 00:50:16.657374 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-23 00:50:16.657384 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-23 00:50:16.657392 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-03-23 00:50:16.657400 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.657408 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.657417 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.657425 | orchestrator | 2026-03-23 00:50:16.657434 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-23 00:50:16.657443 | orchestrator | Monday 23 March 2026 00:47:25 +0000 (0:00:53.657) 0:01:40.700 ********** 2026-03-23 00:50:16.657452 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:16.657460 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.657466 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.657471 | orchestrator | 2026-03-23 00:50:16.657476 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-23 00:50:16.657486 | orchestrator | Monday 23 March 2026 00:47:25 +0000 (0:00:00.730) 0:01:41.431 ********** 2026-03-23 00:50:16.657491 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657496 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.657501 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.657514 | orchestrator | 2026-03-23 00:50:16.657519 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-23 00:50:16.657525 | orchestrator | Monday 23 March 2026 00:47:26 +0000 (0:00:01.114) 0:01:42.545 ********** 2026-03-23 00:50:16.657530 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657535 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.657540 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.657545 | orchestrator | 2026-03-23 00:50:16.657550 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-23 00:50:16.657555 | orchestrator | Monday 23 March 2026 00:47:28 +0000 (0:00:01.254) 0:01:43.799 ********** 2026-03-23 00:50:16.657560 
| orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657565 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.657570 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.657575 | orchestrator | 2026-03-23 00:50:16.657580 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-23 00:50:16.657585 | orchestrator | Monday 23 March 2026 00:47:54 +0000 (0:00:26.063) 0:02:09.863 ********** 2026-03-23 00:50:16.657590 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.657595 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.657601 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.657606 | orchestrator | 2026-03-23 00:50:16.657611 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-23 00:50:16.657616 | orchestrator | Monday 23 March 2026 00:47:55 +0000 (0:00:00.902) 0:02:10.765 ********** 2026-03-23 00:50:16.657621 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.657626 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.657631 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.657636 | orchestrator | 2026-03-23 00:50:16.657641 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-23 00:50:16.657650 | orchestrator | Monday 23 March 2026 00:47:56 +0000 (0:00:01.030) 0:02:11.796 ********** 2026-03-23 00:50:16.657689 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657703 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.657711 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.657719 | orchestrator | 2026-03-23 00:50:16.657728 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-23 00:50:16.657736 | orchestrator | Monday 23 March 2026 00:47:56 +0000 (0:00:00.696) 0:02:12.493 ********** 2026-03-23 00:50:16.657744 | orchestrator | ok: [testbed-node-1] 
2026-03-23 00:50:16.657753 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.657762 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.657771 | orchestrator | 2026-03-23 00:50:16.657780 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-23 00:50:16.657788 | orchestrator | Monday 23 March 2026 00:47:57 +0000 (0:00:00.704) 0:02:13.197 ********** 2026-03-23 00:50:16.657797 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.657806 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.657814 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.657822 | orchestrator | 2026-03-23 00:50:16.657827 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-23 00:50:16.657832 | orchestrator | Monday 23 March 2026 00:47:58 +0000 (0:00:00.446) 0:02:13.644 ********** 2026-03-23 00:50:16.657837 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657842 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.657847 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.657852 | orchestrator | 2026-03-23 00:50:16.657857 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-23 00:50:16.657862 | orchestrator | Monday 23 March 2026 00:47:59 +0000 (0:00:01.100) 0:02:14.745 ********** 2026-03-23 00:50:16.657868 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657873 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.657878 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.657883 | orchestrator | 2026-03-23 00:50:16.657888 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-23 00:50:16.657893 | orchestrator | Monday 23 March 2026 00:47:59 +0000 (0:00:00.721) 0:02:15.467 ********** 2026-03-23 00:50:16.657898 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657903 | 
orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.657908 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.657913 | orchestrator | 2026-03-23 00:50:16.657918 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-23 00:50:16.657923 | orchestrator | Monday 23 March 2026 00:48:00 +0000 (0:00:00.882) 0:02:16.349 ********** 2026-03-23 00:50:16.657928 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:16.657933 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:16.657938 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:16.657943 | orchestrator | 2026-03-23 00:50:16.657949 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-23 00:50:16.657957 | orchestrator | Monday 23 March 2026 00:48:01 +0000 (0:00:00.709) 0:02:17.059 ********** 2026-03-23 00:50:16.657971 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:16.657979 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.657987 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.657995 | orchestrator | 2026-03-23 00:50:16.658003 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-23 00:50:16.658053 | orchestrator | Monday 23 March 2026 00:48:01 +0000 (0:00:00.471) 0:02:17.531 ********** 2026-03-23 00:50:16.658061 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:16.658066 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:16.658071 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:16.658076 | orchestrator | 2026-03-23 00:50:16.658081 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-23 00:50:16.658086 | orchestrator | Monday 23 March 2026 00:48:02 +0000 (0:00:00.287) 0:02:17.818 ********** 2026-03-23 00:50:16.658109 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.658115 | orchestrator | 
ok: [testbed-node-2] 2026-03-23 00:50:16.658129 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.658137 | orchestrator | 2026-03-23 00:50:16.658146 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-23 00:50:16.658155 | orchestrator | Monday 23 March 2026 00:48:02 +0000 (0:00:00.744) 0:02:18.563 ********** 2026-03-23 00:50:16.658163 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:16.658179 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:16.658189 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:16.658197 | orchestrator | 2026-03-23 00:50:16.658206 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-23 00:50:16.658216 | orchestrator | Monday 23 March 2026 00:48:03 +0000 (0:00:00.654) 0:02:19.217 ********** 2026-03-23 00:50:16.658225 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-23 00:50:16.658234 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-23 00:50:16.658243 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-23 00:50:16.658251 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-23 00:50:16.658259 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-23 00:50:16.658268 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-23 00:50:16.658276 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-23 00:50:16.658284 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-23 
00:50:16.658293 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-23 00:50:16.658302 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-23 00:50:16.658315 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-23 00:50:16.658324 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-23 00:50:16.658333 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-23 00:50:16.658341 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-23 00:50:16.658349 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-23 00:50:16.658358 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-23 00:50:16.658366 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-23 00:50:16.658375 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-23 00:50:16.658384 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-23 00:50:16.658393 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-23 00:50:16.658401 | orchestrator | 2026-03-23 00:50:16.658410 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-23 00:50:16.658419 | orchestrator | 2026-03-23 00:50:16.658428 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-23 00:50:16.658437 | orchestrator | Monday 23 March 2026 00:48:06 +0000 (0:00:03.254) 
0:02:22.471 ********** 2026-03-23 00:50:16.658446 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:50:16.658454 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:50:16.658462 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:50:16.658471 | orchestrator | 2026-03-23 00:50:16.658479 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-23 00:50:16.658493 | orchestrator | Monday 23 March 2026 00:48:07 +0000 (0:00:00.317) 0:02:22.788 ********** 2026-03-23 00:50:16.658502 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:50:16.658511 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:50:16.658519 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:50:16.658528 | orchestrator | 2026-03-23 00:50:16.658537 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-23 00:50:16.658546 | orchestrator | Monday 23 March 2026 00:48:07 +0000 (0:00:00.608) 0:02:23.397 ********** 2026-03-23 00:50:16.658555 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:50:16.658563 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:50:16.658572 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:50:16.658580 | orchestrator | 2026-03-23 00:50:16.658589 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-23 00:50:16.658598 | orchestrator | Monday 23 March 2026 00:48:08 +0000 (0:00:00.465) 0:02:23.863 ********** 2026-03-23 00:50:16.658607 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:50:16.658616 | orchestrator | 2026-03-23 00:50:16.658624 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-23 00:50:16.658633 | orchestrator | Monday 23 March 2026 00:48:08 +0000 (0:00:00.516) 0:02:24.379 ********** 2026-03-23 00:50:16.658642 | orchestrator | skipping: [testbed-node-3] 2026-03-23 
00:50:16.658687 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:50:16.658699 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:50:16.658708 | orchestrator | 2026-03-23 00:50:16.658716 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-23 00:50:16.658725 | orchestrator | Monday 23 March 2026 00:48:09 +0000 (0:00:00.278) 0:02:24.657 ********** 2026-03-23 00:50:16.658734 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:50:16.658743 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:50:16.658752 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:50:16.658760 | orchestrator | 2026-03-23 00:50:16.658769 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-23 00:50:16.658783 | orchestrator | Monday 23 March 2026 00:48:09 +0000 (0:00:00.383) 0:02:25.041 ********** 2026-03-23 00:50:16.658792 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:50:16.658801 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:50:16.658810 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:50:16.658819 | orchestrator | 2026-03-23 00:50:16.658828 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-23 00:50:16.658836 | orchestrator | Monday 23 March 2026 00:48:09 +0000 (0:00:00.323) 0:02:25.364 ********** 2026-03-23 00:50:16.658845 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:50:16.658853 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:50:16.658862 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:50:16.658871 | orchestrator | 2026-03-23 00:50:16.658880 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-23 00:50:16.658889 | orchestrator | Monday 23 March 2026 00:48:10 +0000 (0:00:00.751) 0:02:26.116 ********** 2026-03-23 00:50:16.658897 | orchestrator | changed: [testbed-node-3] 2026-03-23 
00:50:16.658906 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:50:16.658914 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:50:16.658923 | orchestrator | 2026-03-23 00:50:16.658931 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-23 00:50:16.658940 | orchestrator | Monday 23 March 2026 00:48:11 +0000 (0:00:01.054) 0:02:27.170 ********** 2026-03-23 00:50:16.658949 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:50:16.658958 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:50:16.658966 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:50:16.658975 | orchestrator | 2026-03-23 00:50:16.658984 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-23 00:50:16.658992 | orchestrator | Monday 23 March 2026 00:48:13 +0000 (0:00:01.552) 0:02:28.723 ********** 2026-03-23 00:50:16.659007 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:50:16.659016 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:50:16.659024 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:50:16.659033 | orchestrator | 2026-03-23 00:50:16.659046 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-23 00:50:16.659055 | orchestrator | 2026-03-23 00:50:16.659064 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-23 00:50:16.659072 | orchestrator | Monday 23 March 2026 00:48:22 +0000 (0:00:09.662) 0:02:38.385 ********** 2026-03-23 00:50:16.659081 | orchestrator | ok: [testbed-manager] 2026-03-23 00:50:16.659090 | orchestrator | 2026-03-23 00:50:16.659098 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-23 00:50:16.659107 | orchestrator | Monday 23 March 2026 00:48:23 +0000 (0:00:00.856) 0:02:39.241 ********** 2026-03-23 00:50:16.659116 | orchestrator | changed: [testbed-manager] 
2026-03-23 00:50:16.659124 | orchestrator |
2026-03-23 00:50:16.659133 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-23 00:50:16.659141 | orchestrator | Monday 23 March 2026 00:48:24 +0000 (0:00:00.602) 0:02:39.844 **********
2026-03-23 00:50:16.659150 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-23 00:50:16.659158 | orchestrator |
2026-03-23 00:50:16.659167 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-23 00:50:16.659176 | orchestrator | Monday 23 March 2026 00:48:24 +0000 (0:00:00.490) 0:02:40.334 **********
2026-03-23 00:50:16.659184 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:16.659193 | orchestrator |
2026-03-23 00:50:16.659201 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-23 00:50:16.659210 | orchestrator | Monday 23 March 2026 00:48:25 +0000 (0:00:01.019) 0:02:41.354 **********
2026-03-23 00:50:16.659218 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:16.659227 | orchestrator |
2026-03-23 00:50:16.659235 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-23 00:50:16.659244 | orchestrator | Monday 23 March 2026 00:48:26 +0000 (0:00:00.544) 0:02:41.899 **********
2026-03-23 00:50:16.659252 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-23 00:50:16.659261 | orchestrator |
2026-03-23 00:50:16.659270 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-23 00:50:16.659279 | orchestrator | Monday 23 March 2026 00:48:27 +0000 (0:00:01.437) 0:02:43.336 **********
2026-03-23 00:50:16.659288 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-23 00:50:16.659296 | orchestrator |
2026-03-23 00:50:16.659305 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-23 00:50:16.659314 | orchestrator | Monday 23 March 2026 00:48:28 +0000 (0:00:00.843) 0:02:44.180 **********
2026-03-23 00:50:16.659322 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:16.659329 | orchestrator |
2026-03-23 00:50:16.659337 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-23 00:50:16.659345 | orchestrator | Monday 23 March 2026 00:48:28 +0000 (0:00:00.375) 0:02:44.555 **********
2026-03-23 00:50:16.659353 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:16.659362 | orchestrator |
2026-03-23 00:50:16.659370 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-23 00:50:16.659378 | orchestrator |
2026-03-23 00:50:16.659386 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-23 00:50:16.659394 | orchestrator | Monday 23 March 2026 00:48:29 +0000 (0:00:00.381) 0:02:44.937 **********
2026-03-23 00:50:16.659402 | orchestrator | ok: [testbed-manager]
2026-03-23 00:50:16.659411 | orchestrator |
2026-03-23 00:50:16.659421 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-23 00:50:16.659429 | orchestrator | Monday 23 March 2026 00:48:29 +0000 (0:00:00.132) 0:02:45.069 **********
2026-03-23 00:50:16.659438 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-23 00:50:16.659446 | orchestrator |
2026-03-23 00:50:16.659460 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-23 00:50:16.659468 | orchestrator | Monday 23 March 2026 00:48:29 +0000 (0:00:00.203) 0:02:45.273 **********
2026-03-23 00:50:16.659477 | orchestrator | ok: [testbed-manager]
2026-03-23 00:50:16.659486 | orchestrator |
2026-03-23 00:50:16.659495 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-23 00:50:16.659503 | orchestrator | Monday 23 March 2026 00:48:30 +0000 (0:00:00.938) 0:02:46.211 **********
2026-03-23 00:50:16.659516 | orchestrator | ok: [testbed-manager]
2026-03-23 00:50:16.659525 | orchestrator |
2026-03-23 00:50:16.659532 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-23 00:50:16.659541 | orchestrator | Monday 23 March 2026 00:48:32 +0000 (0:00:01.450) 0:02:47.662 **********
2026-03-23 00:50:16.659548 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:16.659557 | orchestrator |
2026-03-23 00:50:16.659565 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-23 00:50:16.659573 | orchestrator | Monday 23 March 2026 00:48:32 +0000 (0:00:00.818) 0:02:48.481 **********
2026-03-23 00:50:16.659582 | orchestrator | ok: [testbed-manager]
2026-03-23 00:50:16.659591 | orchestrator |
2026-03-23 00:50:16.659596 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-23 00:50:16.659600 | orchestrator | Monday 23 March 2026 00:48:33 +0000 (0:00:00.547) 0:02:49.029 **********
2026-03-23 00:50:16.659605 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:16.659610 | orchestrator |
2026-03-23 00:50:16.659615 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-23 00:50:16.659621 | orchestrator | Monday 23 March 2026 00:48:40 +0000 (0:00:07.328) 0:02:56.357 **********
2026-03-23 00:50:16.659629 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:16.659637 | orchestrator |
2026-03-23 00:50:16.659644 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-23 00:50:16.659665 | orchestrator | Monday 23 March 2026 00:48:55 +0000 (0:00:14.956) 0:03:11.313 **********
2026-03-23 00:50:16.659674 | orchestrator | ok: [testbed-manager]
2026-03-23 00:50:16.659683 | orchestrator |
2026-03-23 00:50:16.659691 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-23 00:50:16.659699 | orchestrator |
2026-03-23 00:50:16.659707 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-23 00:50:16.659715 | orchestrator | Monday 23 March 2026 00:48:56 +0000 (0:00:00.499) 0:03:11.812 **********
2026-03-23 00:50:16.659724 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:50:16.659737 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:50:16.659742 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:50:16.659747 | orchestrator |
2026-03-23 00:50:16.659753 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-23 00:50:16.659761 | orchestrator | Monday 23 March 2026 00:48:56 +0000 (0:00:00.504) 0:03:12.317 **********
2026-03-23 00:50:16.659770 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.659781 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.659788 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.659797 | orchestrator |
2026-03-23 00:50:16.659804 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-23 00:50:16.659811 | orchestrator | Monday 23 March 2026 00:48:56 +0000 (0:00:00.450) 0:03:12.538 **********
2026-03-23 00:50:16.659819 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:50:16.659827 | orchestrator |
2026-03-23 00:50:16.659836 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-23 00:50:16.659844 | orchestrator | Monday 23 March 2026 00:48:57 +0000 (0:00:00.706) 0:03:12.989 **********
2026-03-23 00:50:16.659852 | orchestrator | changed: [testbed-node-0 -> localhost]
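Every task header above carries the wall-clock timestamp, the previous task's duration in parentheses, and the cumulative play time, e.g. `Monday 23 March 2026 00:48:56 +0000 (0:00:00.499) 0:03:11.812`. A small sketch for pulling those two numbers out when profiling a run (the line format is inferred from this log, not from any documented schema):

```python
import re

# Matches "(H:MM:SS.mmm) H:MM:SS.mmm" as emitted by the timing callback above.
TASK_TIMING = re.compile(
    r"\((?P<dur>\d+:\d{2}:\d{2}\.\d+)\)\s+(?P<total>\d+:\d{2}:\d{2}\.\d+)"
)

def parse_timing(line: str):
    """Return (task_duration_seconds, cumulative_seconds) from a task header line."""
    m = TASK_TIMING.search(line)
    if not m:
        return None
    def to_seconds(hms: str) -> float:
        h, mnt, s = hms.split(":")
        return int(h) * 3600 + int(mnt) * 60 + float(s)
    return to_seconds(m.group("dur")), to_seconds(m.group("total"))

dur, total = parse_timing(
    "Monday 23 March 2026 00:48:56 +0000 (0:00:00.499) 0:03:11.812 **********"
)
# dur is 0.499 s for the task, total 191.812 s since the play started.
```

Summing `dur` over all matched lines reproduces the per-task totals shown in the TASKS RECAP at the end of each play.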
2026-03-23 00:50:16.659860 | orchestrator |
2026-03-23 00:50:16.659865 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-23 00:50:16.659870 | orchestrator | Monday 23 March 2026 00:48:58 +0000 (0:00:00.706) 0:03:13.695 **********
2026-03-23 00:50:16.659880 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-23 00:50:16.659885 | orchestrator |
2026-03-23 00:50:16.659890 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-23 00:50:16.659895 | orchestrator | Monday 23 March 2026 00:48:58 +0000 (0:00:00.876) 0:03:14.572 **********
2026-03-23 00:50:16.659899 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.659904 | orchestrator |
2026-03-23 00:50:16.659909 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-23 00:50:16.659914 | orchestrator | Monday 23 March 2026 00:48:59 +0000 (0:00:00.156) 0:03:14.728 **********
2026-03-23 00:50:16.659919 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-23 00:50:16.659923 | orchestrator |
2026-03-23 00:50:16.659928 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-23 00:50:16.659933 | orchestrator | Monday 23 March 2026 00:49:00 +0000 (0:00:01.016) 0:03:15.745 **********
2026-03-23 00:50:16.659938 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.659942 | orchestrator |
2026-03-23 00:50:16.659947 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-23 00:50:16.659952 | orchestrator | Monday 23 March 2026 00:49:00 +0000 (0:00:00.154) 0:03:15.899 **********
2026-03-23 00:50:16.659957 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.659961 | orchestrator |
2026-03-23 00:50:16.659966 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-23 00:50:16.659971 | orchestrator | Monday 23 March 2026 00:49:00 +0000 (0:00:00.112) 0:03:16.011 **********
2026-03-23 00:50:16.659976 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.659980 | orchestrator |
2026-03-23 00:50:16.659985 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-23 00:50:16.659990 | orchestrator | Monday 23 March 2026 00:49:00 +0000 (0:00:00.152) 0:03:16.164 **********
2026-03-23 00:50:16.659995 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.660000 | orchestrator |
2026-03-23 00:50:16.660005 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-23 00:50:16.660009 | orchestrator | Monday 23 March 2026 00:49:00 +0000 (0:00:00.101) 0:03:16.265 **********
2026-03-23 00:50:16.660014 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-23 00:50:16.660021 | orchestrator |
2026-03-23 00:50:16.660032 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-23 00:50:16.660042 | orchestrator | Monday 23 March 2026 00:49:05 +0000 (0:00:04.649) 0:03:20.915 **********
2026-03-23 00:50:16.660049 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-23 00:50:16.660064 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
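The `FAILED - RETRYING: ... (30 retries left)` line above is Ansible's `until`/`retries`/`delay` loop waiting for each Cilium workload to roll out. The control flow it implements can be sketched as follows (the readiness check is stubbed here; in the role it would be something like a `kubectl rollout status` call against the items listed in the log):

```python
import time

def wait_for(check, retries: int = 30, delay: float = 0.0):
    """Retry a readiness check, like an Ansible task with until/retries/delay.

    Returns the number of attempts used, or raises once retries are exhausted
    (which is when Ansible would print the final FAILED for the task).
    """
    for attempt in range(retries):
        if check():
            return attempt + 1
        # Between attempts Ansible logs "FAILED - RETRYING: ... (N retries left)."
        time.sleep(delay)
    raise TimeoutError("resource never became ready")

# Stub standing in for a rollout check on e.g. deployment/cilium-operator:
state = {"calls": 0}
def fake_rollout_ok():
    state["calls"] += 1
    return state["calls"] >= 2  # becomes ready on the second attempt

attempts = wait_for(fake_rollout_ok, retries=30, delay=0)
```

The 42.24s this task took (see the TASKS RECAP) is dominated by exactly this loop: one failed attempt plus the rollout time of the four Cilium resources.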
2026-03-23 00:50:16.660072 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-23 00:50:16.660080 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-23 00:50:16.660089 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-23 00:50:16.660097 | orchestrator |
2026-03-23 00:50:16.660104 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-23 00:50:16.660111 | orchestrator | Monday 23 March 2026 00:49:47 +0000 (0:00:42.239) 0:04:03.154 **********
2026-03-23 00:50:16.660118 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-23 00:50:16.660127 | orchestrator |
2026-03-23 00:50:16.660132 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-23 00:50:16.660137 | orchestrator | Monday 23 March 2026 00:49:48 +0000 (0:00:01.359) 0:04:04.514 **********
2026-03-23 00:50:16.660141 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-23 00:50:16.660146 | orchestrator |
2026-03-23 00:50:16.660151 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-23 00:50:16.660156 | orchestrator | Monday 23 March 2026 00:49:50 +0000 (0:00:01.639) 0:04:06.153 **********
2026-03-23 00:50:16.660165 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-23 00:50:16.660170 | orchestrator |
2026-03-23 00:50:16.660175 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-23 00:50:16.660180 | orchestrator | Monday 23 March 2026 00:49:51 +0000 (0:00:00.119) 0:04:07.138 **********
2026-03-23 00:50:16.660185 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.660189 | orchestrator |
2026-03-23 00:50:16.660194 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-23 00:50:16.660199 | orchestrator | Monday 23 March 2026 00:49:51 +0000 (0:00:00.119) 0:04:07.258 **********
2026-03-23 00:50:16.660207 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-23 00:50:16.660212 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-23 00:50:16.660217 | orchestrator |
2026-03-23 00:50:16.660222 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-23 00:50:16.660227 | orchestrator | Monday 23 March 2026 00:49:54 +0000 (0:00:02.811) 0:04:10.070 **********
2026-03-23 00:50:16.660232 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.660237 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.660241 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.660246 | orchestrator |
2026-03-23 00:50:16.660251 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-23 00:50:16.660256 | orchestrator | Monday 23 March 2026 00:49:54 +0000 (0:00:00.459) 0:04:10.529 **********
2026-03-23 00:50:16.660260 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:50:16.660265 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:50:16.660270 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:50:16.660275 | orchestrator |
2026-03-23 00:50:16.660279 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-23 00:50:16.660284 | orchestrator |
2026-03-23 00:50:16.660289 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-23 00:50:16.660293 | orchestrator | Monday 23 March 2026 00:49:55 +0000 (0:00:00.799) 0:04:11.329 **********
2026-03-23 00:50:16.660298 | orchestrator | ok: [testbed-manager]
2026-03-23 00:50:16.660303 | orchestrator |
2026-03-23 00:50:16.660308 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-23 00:50:16.660313 | orchestrator | Monday 23 March 2026 00:49:55 +0000 (0:00:00.133) 0:04:11.462 **********
2026-03-23 00:50:16.660317 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-23 00:50:16.660322 | orchestrator |
2026-03-23 00:50:16.660327 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-23 00:50:16.660332 | orchestrator | Monday 23 March 2026 00:49:56 +0000 (0:00:00.303) 0:04:11.765 **********
2026-03-23 00:50:16.660337 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:16.660345 | orchestrator |
2026-03-23 00:50:16.660353 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-23 00:50:16.660360 | orchestrator |
2026-03-23 00:50:16.660365 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-23 00:50:16.660369 | orchestrator | Monday 23 March 2026 00:50:01 +0000 (0:00:05.043) 0:04:16.809 **********
2026-03-23 00:50:16.660374 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:50:16.660379 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:50:16.660384 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:50:16.660389 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:50:16.660393 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:50:16.660398 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:50:16.660403 | orchestrator |
2026-03-23 00:50:16.660407 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-23 00:50:16.660412 | orchestrator | Monday 23 March 2026 00:50:01 +0000 (0:00:00.614) 0:04:17.423 **********
2026-03-23 00:50:16.660417 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-23 00:50:16.660427 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-23 00:50:16.660432 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-23 00:50:16.660437 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-23 00:50:16.660442 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-23 00:50:16.660446 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-23 00:50:16.660451 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-23 00:50:16.660456 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-23 00:50:16.660465 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-23 00:50:16.660469 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-23 00:50:16.660474 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-23 00:50:16.660479 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-23 00:50:16.660484 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-23 00:50:16.660489 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-23 00:50:16.660493 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-23 00:50:16.660498 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-23 00:50:16.660503 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-23 00:50:16.660510 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-23 00:50:16.660519 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-23 00:50:16.660524 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-23 00:50:16.660528 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-23 00:50:16.660533 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-23 00:50:16.660544 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-23 00:50:16.660553 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-23 00:50:16.660561 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-23 00:50:16.660569 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-23 00:50:16.660577 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-23 00:50:16.660585 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-23 00:50:16.660593 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-23 00:50:16.660602 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-23 00:50:16.660610 | orchestrator |
2026-03-23 00:50:16.660618 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-23 00:50:16.660626 | orchestrator | Monday 23 March 2026 00:50:15 +0000 (0:00:13.389) 0:04:30.813 **********
2026-03-23 00:50:16.660634 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.660642 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.660650 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.660694 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.660702 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.660710 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.660724 | orchestrator |
2026-03-23 00:50:16.660732 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-23 00:50:16.660740 | orchestrator | Monday 23 March 2026 00:50:15 +0000 (0:00:00.513) 0:04:31.327 **********
2026-03-23 00:50:16.660748 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:50:16.660756 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:50:16.660764 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:50:16.660771 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:50:16.660779 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:16.660787 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:16.660795 | orchestrator |
2026-03-23 00:50:16.660803 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:50:16.660812 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:50:16.660821 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-23 00:50:16.660829 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-23 00:50:16.660837 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-23 00:50:16.660846 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-23 00:50:16.660854 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-23 00:50:16.660862 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-23 00:50:16.660870 | orchestrator |
2026-03-23 00:50:16.660878 | orchestrator |
2026-03-23 00:50:16.660886 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:50:16.660900 | orchestrator | Monday 23 March 2026 00:50:16 +0000 (0:00:00.476) 0:04:31.803 **********
2026-03-23 00:50:16.660909 | orchestrator | ===============================================================================
2026-03-23 00:50:16.660917 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.66s
2026-03-23 00:50:16.660925 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.24s
2026-03-23 00:50:16.660933 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.06s
2026-03-23 00:50:16.660941 | orchestrator | kubectl : Install required packages ------------------------------------ 14.96s
2026-03-23 00:50:16.660949 | orchestrator | Manage labels ---------------------------------------------------------- 13.39s
2026-03-23 00:50:16.660957 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.66s
2026-03-23 00:50:16.660965 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.33s
2026-03-23 00:50:16.660973 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.57s
2026-03-23 00:50:16.660981 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.04s
2026-03-23 00:50:16.660989 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.65s
2026-03-23 00:50:16.660997 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.25s
2026-03-23 00:50:16.661005 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.81s
2026-03-23 00:50:16.661013 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.67s
2026-03-23 00:50:16.661021 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.55s
2026-03-23 00:50:16.661035 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.36s
2026-03-23 00:50:16.661043 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.13s
2026-03-23 00:50:16.661051 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 1.97s
2026-03-23 00:50:16.661059 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.66s
2026-03-23 00:50:16.661067 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.64s
2026-03-23 00:50:16.661075 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.61s
2026-03-23 00:50:16.661524 | orchestrator | 2026-03-23 00:50:16 | INFO  | Task d8c6fd32-9743-4cb4-ada5-c9a6c634576b is in state SUCCESS
2026-03-23 00:50:16.661546 | orchestrator | 2026-03-23 00:50:16 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:16.661554 | orchestrator | 2026-03-23 00:50:16 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:16.661562 | orchestrator | 2026-03-23 00:50:16 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:16.661569 | orchestrator | 2026-03-23 00:50:16 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:16.661577 | orchestrator | 2026-03-23 00:50:16 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:19.698540 | orchestrator | 2026-03-23 00:50:19 | INFO  | Task bb2a9bb9-52fb-4626-8344-035d3dddbe6d is in state STARTED
2026-03-23 00:50:19.698610 | orchestrator | 2026-03-23 00:50:19 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:19.698632 | orchestrator | 2026-03-23 00:50:19 | INFO  | Task 59a22ab4-49cc-4287-b1b8-4d8410287024 is in state STARTED
2026-03-23 00:50:19.698640 | orchestrator | 2026-03-23 00:50:19 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:19.698684 | orchestrator | 2026-03-23 00:50:19 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:19.698697 | orchestrator | 2026-03-23 00:50:19 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:19.698708 | orchestrator | 2026-03-23 00:50:19 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:22.719582 | orchestrator | 2026-03-23 00:50:22 | INFO  | Task bb2a9bb9-52fb-4626-8344-035d3dddbe6d is in state STARTED
2026-03-23 00:50:22.719832 | orchestrator | 2026-03-23 00:50:22 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:22.720508 | orchestrator | 2026-03-23 00:50:22 | INFO  | Task 59a22ab4-49cc-4287-b1b8-4d8410287024 is in state STARTED
2026-03-23 00:50:22.720813 | orchestrator | 2026-03-23 00:50:22 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:22.722776 | orchestrator | 2026-03-23 00:50:22 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:22.729153 | orchestrator | 2026-03-23 00:50:22 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:22.729227 | orchestrator | 2026-03-23 00:50:22 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:25.749073 | orchestrator | 2026-03-23 00:50:25 | INFO  | Task bb2a9bb9-52fb-4626-8344-035d3dddbe6d is in state SUCCESS
2026-03-23 00:50:25.749140 | orchestrator | 2026-03-23 00:50:25 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:25.749831 | orchestrator | 2026-03-23 00:50:25 | INFO  | Task 59a22ab4-49cc-4287-b1b8-4d8410287024 is in state STARTED
2026-03-23 00:50:25.750282 | orchestrator | 2026-03-23 00:50:25 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:25.750983 | orchestrator | 2026-03-23 00:50:25 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:25.751990 | orchestrator | 2026-03-23 00:50:25 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:25.752035 | orchestrator | 2026-03-23 00:50:25 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:28.785633 | orchestrator | 2026-03-23 00:50:28 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:28.785775 | orchestrator | 2026-03-23 00:50:28 | INFO  | Task 59a22ab4-49cc-4287-b1b8-4d8410287024 is in state SUCCESS
2026-03-23 00:50:28.785790 | orchestrator | 2026-03-23 00:50:28 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:28.786592 | orchestrator | 2026-03-23 00:50:28 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:28.787471 | orchestrator | 2026-03-23 00:50:28 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:28.787516 | orchestrator | 2026-03-23 00:50:28 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:31.819305 | orchestrator | 2026-03-23 00:50:31 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:31.819594 | orchestrator | 2026-03-23 00:50:31 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:31.820405 | orchestrator | 2026-03-23 00:50:31 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:31.821032 | orchestrator | 2026-03-23 00:50:31 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:31.821088 | orchestrator | 2026-03-23 00:50:31 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:34.844260 | orchestrator | 2026-03-23 00:50:34 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:34.846748 | orchestrator | 2026-03-23 00:50:34 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:34.848478 | orchestrator | 2026-03-23 00:50:34 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:34.850430 | orchestrator | 2026-03-23 00:50:34 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:34.850479 | orchestrator | 2026-03-23 00:50:34 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:37.888154 | orchestrator | 2026-03-23 00:50:37 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:37.891540 | orchestrator | 2026-03-23 00:50:37 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:37.895485 | orchestrator | 2026-03-23 00:50:37 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:37.898661 | orchestrator | 2026-03-23 00:50:37 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:37.900876 | orchestrator | 2026-03-23 00:50:37 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:40.931563 | orchestrator | 2026-03-23 00:50:40 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:40.932468 | orchestrator | 2026-03-23 00:50:40 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:40.934794 | orchestrator | 2026-03-23 00:50:40 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:40.936857 | orchestrator | 2026-03-23 00:50:40 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:40.937331 | orchestrator | 2026-03-23 00:50:40 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:43.969176 | orchestrator | 2026-03-23 00:50:43 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:43.969330 | orchestrator | 2026-03-23 00:50:43 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:43.970209 | orchestrator | 2026-03-23 00:50:43 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:43.971018 | orchestrator | 2026-03-23 00:50:43 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:43.971063 | orchestrator | 2026-03-23 00:50:43 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:47.011062 | orchestrator | 2026-03-23 00:50:47 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:47.012281 | orchestrator | 2026-03-23 00:50:47 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:47.013828 | orchestrator | 2026-03-23 00:50:47 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:47.015407 | orchestrator | 2026-03-23 00:50:47 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:47.015675 | orchestrator | 2026-03-23 00:50:47 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:50.044668 | orchestrator | 2026-03-23 00:50:50 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:50.048143 | orchestrator | 2026-03-23 00:50:50 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state STARTED
2026-03-23 00:50:50.050007 | orchestrator | 2026-03-23 00:50:50 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:50.050236 | orchestrator | 2026-03-23 00:50:50 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:50.050257 | orchestrator | 2026-03-23 00:50:50 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:53.086260 | orchestrator | 2026-03-23 00:50:53 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:53.088906 | orchestrator |
2026-03-23 00:50:53.089015 | orchestrator |
2026-03-23 00:50:53.089026 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-23 00:50:53.089032 | orchestrator |
2026-03-23 00:50:53.089036 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-23 00:50:53.089041 | orchestrator | Monday 23 March 2026 00:50:20 +0000 (0:00:00.284) 0:00:00.284 **********
2026-03-23 00:50:53.089046 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-23 00:50:53.089050 | orchestrator |
2026-03-23 00:50:53.089054 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-23 00:50:53.089059 | orchestrator | Monday 23 March 2026 00:50:21 +0000 (0:00:01.140) 0:00:01.424 **********
2026-03-23 00:50:53.089066 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:53.089072 | orchestrator |
2026-03-23 00:50:53.089078 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-23 00:50:53.089083 | orchestrator | Monday 23 March 2026 00:50:23 +0000 (0:00:01.410) 0:00:02.835 **********
2026-03-23 00:50:53.089090 | orchestrator | changed: [testbed-manager]
2026-03-23 00:50:53.089097 | orchestrator |
2026-03-23 00:50:53.089104 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:50:53.089110 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:50:53.089118 | orchestrator |
2026-03-23 00:50:53.089124 | orchestrator |
2026-03-23 00:50:53.089154 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:50:53.089162 | orchestrator | Monday 23 March 2026 00:50:23 +0000 (0:00:00.564) 0:00:03.399 **********
2026-03-23 00:50:53.089181 | orchestrator | ===============================================================================
2026-03-23 00:50:53.089188 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.41s
2026-03-23 00:50:53.089192 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.14s
2026-03-23 00:50:53.089196 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.56s
2026-03-23 00:50:53.089200 | orchestrator |
2026-03-23 00:50:53.089204 | orchestrator |
2026-03-23 00:50:53.089208 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-23 00:50:53.089212 | orchestrator |
2026-03-23 00:50:53.089215 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-23 00:50:53.089219 | orchestrator | Monday 23 March 2026 00:50:20 +0000 (0:00:00.273) 0:00:00.273 **********
2026-03-23 00:50:53.089223 | orchestrator | ok: [testbed-manager]
2026-03-23 00:50:53.089228 | orchestrator |
2026-03-23 00:50:53.089231 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-23 00:50:53.089235 | orchestrator | Monday 23 March 2026 00:50:21 +0000 (0:00:00.965) 0:00:01.238 **********
2026-03-23 00:50:53.089239 | orchestrator | ok: [testbed-manager]
2026-03-23 00:50:53.089242 | orchestrator |
2026-03-23 00:50:53.089246 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-23 00:50:53.089250 | orchestrator | Monday 23 March 2026 00:50:21 +0000 (0:00:00.509) 0:00:01.747 **********
2026-03-23 00:50:53.089254 | orchestrator |
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-23 00:50:53.089257 | orchestrator | 2026-03-23 00:50:53.089261 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-23 00:50:53.089265 | orchestrator | Monday 23 March 2026 00:50:22 +0000 (0:00:00.878) 0:00:02.626 ********** 2026-03-23 00:50:53.089268 | orchestrator | changed: [testbed-manager] 2026-03-23 00:50:53.089272 | orchestrator | 2026-03-23 00:50:53.089276 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-23 00:50:53.089279 | orchestrator | Monday 23 March 2026 00:50:24 +0000 (0:00:01.481) 0:00:04.108 ********** 2026-03-23 00:50:53.089283 | orchestrator | changed: [testbed-manager] 2026-03-23 00:50:53.089287 | orchestrator | 2026-03-23 00:50:53.089290 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-23 00:50:53.089294 | orchestrator | Monday 23 March 2026 00:50:24 +0000 (0:00:00.418) 0:00:04.526 ********** 2026-03-23 00:50:53.089298 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-23 00:50:53.089301 | orchestrator | 2026-03-23 00:50:53.089318 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-23 00:50:53.089348 | orchestrator | Monday 23 March 2026 00:50:26 +0000 (0:00:01.456) 0:00:05.983 ********** 2026-03-23 00:50:53.089355 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-23 00:50:53.089360 | orchestrator | 2026-03-23 00:50:53.089364 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-23 00:50:53.089368 | orchestrator | Monday 23 March 2026 00:50:26 +0000 (0:00:00.772) 0:00:06.756 ********** 2026-03-23 00:50:53.089371 | orchestrator | ok: [testbed-manager] 2026-03-23 00:50:53.089375 | orchestrator | 2026-03-23 00:50:53.089379 | orchestrator | TASK [Enable kubectl command line completion] 
********************************** 2026-03-23 00:50:53.089382 | orchestrator | Monday 23 March 2026 00:50:27 +0000 (0:00:00.374) 0:00:07.130 ********** 2026-03-23 00:50:53.089395 | orchestrator | ok: [testbed-manager] 2026-03-23 00:50:53.089400 | orchestrator | 2026-03-23 00:50:53.089407 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:50:53.089416 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:50:53.089422 | orchestrator | 2026-03-23 00:50:53.089428 | orchestrator | 2026-03-23 00:50:53.089433 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:50:53.089445 | orchestrator | Monday 23 March 2026 00:50:27 +0000 (0:00:00.294) 0:00:07.425 ********** 2026-03-23 00:50:53.089451 | orchestrator | =============================================================================== 2026-03-23 00:50:53.089456 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.48s 2026-03-23 00:50:53.089461 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.46s 2026-03-23 00:50:53.089467 | orchestrator | Get home directory of operator user ------------------------------------- 0.97s 2026-03-23 00:50:53.089490 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.88s 2026-03-23 00:50:53.089497 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.77s 2026-03-23 00:50:53.089503 | orchestrator | Create .kube directory -------------------------------------------------- 0.51s 2026-03-23 00:50:53.089510 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.42s 2026-03-23 00:50:53.089513 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s 2026-03-23 00:50:53.089517 | 
orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s 2026-03-23 00:50:53.089521 | orchestrator | 2026-03-23 00:50:53.089524 | orchestrator | 2026-03-23 00:50:53.089528 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-23 00:50:53.089532 | orchestrator | 2026-03-23 00:50:53.089535 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-23 00:50:53.089539 | orchestrator | Monday 23 March 2026 00:48:42 +0000 (0:00:00.210) 0:00:00.210 ********** 2026-03-23 00:50:53.089543 | orchestrator | ok: [localhost] => { 2026-03-23 00:50:53.089547 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-23 00:50:53.089551 | orchestrator | } 2026-03-23 00:50:53.089556 | orchestrator | 2026-03-23 00:50:53.089559 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-23 00:50:53.089563 | orchestrator | Monday 23 March 2026 00:48:42 +0000 (0:00:00.035) 0:00:00.246 ********** 2026-03-23 00:50:53.089573 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-23 00:50:53.089578 | orchestrator | ...ignoring 2026-03-23 00:50:53.089582 | orchestrator | 2026-03-23 00:50:53.089585 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-23 00:50:53.089589 | orchestrator | Monday 23 March 2026 00:48:45 +0000 (0:00:03.127) 0:00:03.374 ********** 2026-03-23 00:50:53.089593 | orchestrator | skipping: [localhost] 2026-03-23 00:50:53.089596 | orchestrator | 2026-03-23 00:50:53.089600 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-23 00:50:53.089604 | orchestrator | Monday 23 March 2026 00:48:45 +0000 (0:00:00.043) 0:00:03.417 ********** 2026-03-23 00:50:53.089608 | orchestrator | ok: [localhost] 2026-03-23 00:50:53.089611 | orchestrator | 2026-03-23 00:50:53.089634 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 00:50:53.089640 | orchestrator | 2026-03-23 00:50:53.089646 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 00:50:53.089651 | orchestrator | Monday 23 March 2026 00:48:46 +0000 (0:00:00.214) 0:00:03.631 ********** 2026-03-23 00:50:53.089657 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:53.089662 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:50:53.089668 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:50:53.089673 | orchestrator | 2026-03-23 00:50:53.089679 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 00:50:53.089685 | orchestrator | Monday 23 March 2026 00:48:46 +0000 (0:00:00.286) 0:00:03.917 ********** 2026-03-23 00:50:53.089691 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-23 00:50:53.089695 | orchestrator | ok: [testbed-node-0] => 
(item=enable_rabbitmq_True) 2026-03-23 00:50:53.089755 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-23 00:50:53.089760 | orchestrator | 2026-03-23 00:50:53.089763 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-23 00:50:53.089767 | orchestrator | 2026-03-23 00:50:53.089771 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-23 00:50:53.089775 | orchestrator | Monday 23 March 2026 00:48:46 +0000 (0:00:00.389) 0:00:04.307 ********** 2026-03-23 00:50:53.089779 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:50:53.089783 | orchestrator | 2026-03-23 00:50:53.089787 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-23 00:50:53.089791 | orchestrator | Monday 23 March 2026 00:48:47 +0000 (0:00:00.499) 0:00:04.807 ********** 2026-03-23 00:50:53.089794 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:53.089798 | orchestrator | 2026-03-23 00:50:53.089802 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-23 00:50:53.089806 | orchestrator | Monday 23 March 2026 00:48:48 +0000 (0:00:01.453) 0:00:06.260 ********** 2026-03-23 00:50:53.089809 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:53.089814 | orchestrator | 2026-03-23 00:50:53.089818 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-23 00:50:53.089821 | orchestrator | Monday 23 March 2026 00:48:49 +0000 (0:00:00.490) 0:00:06.751 ********** 2026-03-23 00:50:53.089825 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:53.089829 | orchestrator | 2026-03-23 00:50:53.089833 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-23 00:50:53.089836 | 
orchestrator | Monday 23 March 2026 00:48:49 +0000 (0:00:00.468) 0:00:07.220 ********** 2026-03-23 00:50:53.089840 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:53.089844 | orchestrator | 2026-03-23 00:50:53.089847 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-23 00:50:53.089851 | orchestrator | Monday 23 March 2026 00:48:50 +0000 (0:00:00.976) 0:00:08.196 ********** 2026-03-23 00:50:53.089855 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:53.089859 | orchestrator | 2026-03-23 00:50:53.089863 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-23 00:50:53.089866 | orchestrator | Monday 23 March 2026 00:48:51 +0000 (0:00:00.531) 0:00:08.727 ********** 2026-03-23 00:50:53.089870 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:50:53.089874 | orchestrator | 2026-03-23 00:50:53.089878 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-23 00:50:53.089888 | orchestrator | Monday 23 March 2026 00:48:53 +0000 (0:00:02.148) 0:00:10.875 ********** 2026-03-23 00:50:53.089892 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:53.089895 | orchestrator | 2026-03-23 00:50:53.089899 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-23 00:50:53.089903 | orchestrator | Monday 23 March 2026 00:48:55 +0000 (0:00:01.788) 0:00:12.664 ********** 2026-03-23 00:50:53.089906 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:53.089910 | orchestrator | 2026-03-23 00:50:53.089914 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-23 00:50:53.089920 | orchestrator | Monday 23 March 2026 00:48:56 +0000 (0:00:00.971) 0:00:13.636 ********** 2026-03-23 00:50:53.089926 | orchestrator | 
skipping: [testbed-node-0] 2026-03-23 00:50:53.089932 | orchestrator | 2026-03-23 00:50:53.089938 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-23 00:50:53.089943 | orchestrator | Monday 23 March 2026 00:48:56 +0000 (0:00:00.291) 0:00:13.928 ********** 2026-03-23 00:50:53.089968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:50:53.089984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:50:53.089992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:50:53.089999 | orchestrator | 2026-03-23 00:50:53.090006 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-23 00:50:53.090055 | orchestrator | Monday 23 March 2026 00:48:57 +0000 (0:00:01.062) 0:00:14.990 ********** 2026-03-23 00:50:53.090075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:50:53.090094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:50:53.090102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:50:53.090109 | orchestrator | 2026-03-23 00:50:53.090115 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-23 00:50:53.090121 | orchestrator | Monday 23 March 2026 00:48:58 +0000 (0:00:01.535) 0:00:16.525 ********** 2026-03-23 00:50:53.090127 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-23 00:50:53.090134 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-23 00:50:53.090140 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-23 00:50:53.090147 | orchestrator | 2026-03-23 00:50:53.090154 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-03-23 00:50:53.090162 | orchestrator | Monday 23 March 2026 00:49:00 +0000 (0:00:01.287) 0:00:17.813 ********** 2026-03-23 00:50:53.090169 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-23 00:50:53.090176 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-23 00:50:53.090183 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-23 00:50:53.090190 | orchestrator | 2026-03-23 00:50:53.090197 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-23 00:50:53.090210 | orchestrator | Monday 23 March 2026 00:49:02 +0000 (0:00:02.474) 0:00:20.287 ********** 2026-03-23 00:50:53.090216 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-23 00:50:53.090223 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-23 00:50:53.090235 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-23 00:50:53.090241 | orchestrator | 2026-03-23 00:50:53.090247 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-23 00:50:53.090253 | orchestrator | Monday 23 March 2026 00:49:03 +0000 (0:00:01.276) 0:00:21.564 ********** 2026-03-23 00:50:53.090259 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-23 00:50:53.090266 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-23 00:50:53.090272 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-23 00:50:53.090279 | orchestrator | 2026-03-23 00:50:53.090286 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-03-23 00:50:53.090290 | orchestrator | Monday 23 March 2026 00:49:05 +0000 (0:00:01.649) 0:00:23.213 ********** 2026-03-23 00:50:53.090294 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-23 00:50:53.090298 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-23 00:50:53.090305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-23 00:50:53.090309 | orchestrator | 2026-03-23 00:50:53.090312 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-23 00:50:53.090316 | orchestrator | Monday 23 March 2026 00:49:07 +0000 (0:00:01.428) 0:00:24.641 ********** 2026-03-23 00:50:53.090320 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-23 00:50:53.090323 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-23 00:50:53.090327 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-23 00:50:53.090331 | orchestrator | 2026-03-23 00:50:53.090334 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-23 00:50:53.090338 | orchestrator | Monday 23 March 2026 00:49:08 +0000 (0:00:01.345) 0:00:25.987 ********** 2026-03-23 00:50:53.090342 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:53.090346 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:50:53.090350 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:50:53.090354 | orchestrator | 2026-03-23 00:50:53.090357 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-23 00:50:53.090361 | orchestrator | Monday 23 March 2026 00:49:09 
+0000 (0:00:00.658) 0:00:26.645 ********** 2026-03-23 00:50:53.090365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:50:53.090374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:50:53.090385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:50:53.090390 | orchestrator | 2026-03-23 00:50:53.090393 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-23 00:50:53.090399 | orchestrator | Monday 23 March 2026 00:49:10 +0000 (0:00:01.234) 0:00:27.880 ********** 2026-03-23 00:50:53.090405 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:53.090411 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:53.090416 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:53.090421 | orchestrator | 2026-03-23 00:50:53.090427 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-23 00:50:53.090433 | 
orchestrator | Monday 23 March 2026 00:49:11 +0000 (0:00:00.948) 0:00:28.828 ********** 2026-03-23 00:50:53.090439 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:53.090444 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:53.090451 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:53.090456 | orchestrator | 2026-03-23 00:50:53.090462 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-23 00:50:53.090468 | orchestrator | Monday 23 March 2026 00:49:18 +0000 (0:00:07.008) 0:00:35.837 ********** 2026-03-23 00:50:53.090474 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:50:53.090480 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:50:53.090486 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:50:53.090492 | orchestrator | 2026-03-23 00:50:53.090498 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-23 00:50:53.090504 | orchestrator | 2026-03-23 00:50:53.090510 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-23 00:50:53.090515 | orchestrator | Monday 23 March 2026 00:49:18 +0000 (0:00:00.313) 0:00:36.150 ********** 2026-03-23 00:50:53.090521 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:50:53.090530 | orchestrator | 2026-03-23 00:50:53.090535 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-23 00:50:53.090542 | orchestrator | Monday 23 March 2026 00:49:19 +0000 (0:00:00.550) 0:00:36.700 ********** 2026-03-23 00:50:53.090549 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:50:53.090555 | orchestrator | 2026-03-23 00:50:53.090561 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-23 00:50:53.090573 | orchestrator | Monday 23 March 2026 00:49:19 +0000 (0:00:00.187) 0:00:36.888 ********** 2026-03-23 00:50:53.090579 | orchestrator 
| changed: [testbed-node-0]
2026-03-23 00:50:53.090584 | orchestrator |
2026-03-23 00:50:53.090590 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-23 00:50:53.090596 | orchestrator | Monday 23 March 2026 00:49:20 +0000 (0:00:01.564) 0:00:38.453 **********
2026-03-23 00:50:53.090602 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:50:53.090608 | orchestrator |
2026-03-23 00:50:53.090636 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-23 00:50:53.090640 | orchestrator |
2026-03-23 00:50:53.090644 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-23 00:50:53.090648 | orchestrator | Monday 23 March 2026 00:50:12 +0000 (0:00:51.872) 0:01:30.326 **********
2026-03-23 00:50:53.090652 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:50:53.090655 | orchestrator |
2026-03-23 00:50:53.090659 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-23 00:50:53.090663 | orchestrator | Monday 23 March 2026 00:50:13 +0000 (0:00:00.687) 0:01:31.014 **********
2026-03-23 00:50:53.090667 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:50:53.090670 | orchestrator |
2026-03-23 00:50:53.090684 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-23 00:50:53.090688 | orchestrator | Monday 23 March 2026 00:50:13 +0000 (0:00:00.214) 0:01:31.229 **********
2026-03-23 00:50:53.090691 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:50:53.090695 | orchestrator |
2026-03-23 00:50:53.090705 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-23 00:50:53.090710 | orchestrator | Monday 23 March 2026 00:50:15 +0000 (0:00:01.973) 0:01:33.202 **********
2026-03-23 00:50:53.090716 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:50:53.090722 | orchestrator |
2026-03-23 00:50:53.090728 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-23 00:50:53.090734 | orchestrator |
2026-03-23 00:50:53.090739 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-23 00:50:53.090746 | orchestrator | Monday 23 March 2026 00:50:30 +0000 (0:00:14.779) 0:01:47.982 **********
2026-03-23 00:50:53.090752 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:50:53.090757 | orchestrator |
2026-03-23 00:50:53.090770 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-23 00:50:53.090778 | orchestrator | Monday 23 March 2026 00:50:31 +0000 (0:00:00.665) 0:01:48.648 **********
2026-03-23 00:50:53.090783 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:50:53.090789 | orchestrator |
2026-03-23 00:50:53.090795 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-23 00:50:53.090800 | orchestrator | Monday 23 March 2026 00:50:31 +0000 (0:00:00.176) 0:01:48.824 **********
2026-03-23 00:50:53.090807 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:50:53.090813 | orchestrator |
2026-03-23 00:50:53.090819 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-23 00:50:53.090825 | orchestrator | Monday 23 March 2026 00:50:37 +0000 (0:00:06.721) 0:01:55.546 **********
2026-03-23 00:50:53.090831 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:50:53.090837 | orchestrator |
2026-03-23 00:50:53.090844 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-23 00:50:53.090849 | orchestrator |
2026-03-23 00:50:53.090855 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-23 00:50:53.090862 | orchestrator | Monday 23 March 2026 00:50:49 +0000 (0:00:11.286) 0:02:06.832 **********
2026-03-23 00:50:53.090869 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:50:53.090875 | orchestrator |
2026-03-23 00:50:53.090882 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-23 00:50:53.090888 | orchestrator | Monday 23 March 2026 00:50:50 +0000 (0:00:00.746) 0:02:07.579 **********
2026-03-23 00:50:53.090907 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:50:53.090911 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:50:53.090915 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:50:53.090924 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-23 00:50:53.090928 | orchestrator | enable_outward_rabbitmq_True
2026-03-23 00:50:53.090931 | orchestrator |
2026-03-23 00:50:53.090935 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-23 00:50:53.090939 | orchestrator | skipping: no hosts matched
2026-03-23 00:50:53.090942 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-23 00:50:53.090946 | orchestrator | outward_rabbitmq_restart
2026-03-23 00:50:53.090950 | orchestrator |
2026-03-23 00:50:53.090953 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-23 00:50:53.090957 | orchestrator | skipping: no hosts matched
2026-03-23 00:50:53.090961 | orchestrator |
2026-03-23 00:50:53.090965 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-23 00:50:53.090968 | orchestrator | skipping: no hosts matched
2026-03-23 00:50:53.090972 | orchestrator |
2026-03-23 00:50:53.090976 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:50:53.090980 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-23 00:50:53.090985 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-23 00:50:53.090989 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:50:53.090993 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 00:50:53.090997 | orchestrator |
2026-03-23 00:50:53.091001 | orchestrator |
2026-03-23 00:50:53.091005 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:50:53.091008 | orchestrator | Monday 23 March 2026 00:50:52 +0000 (0:00:02.757) 0:02:10.336 **********
2026-03-23 00:50:53.091012 | orchestrator | ===============================================================================
2026-03-23 00:50:53.091018 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.94s
2026-03-23 00:50:53.091024 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.26s
2026-03-23 00:50:53.091030 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.01s
2026-03-23 00:50:53.091036 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.13s
2026-03-23 00:50:53.091041 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.76s
2026-03-23 00:50:53.091047 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.47s
2026-03-23 00:50:53.091054 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.15s
2026-03-23 00:50:53.091060 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.90s
2026-03-23 00:50:53.091066 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.79s
2026-03-23 00:50:53.091073 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.65s
2026-03-23 00:50:53.091079 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.54s
2026-03-23 00:50:53.091086 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.45s
2026-03-23 00:50:53.091092 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.43s
2026-03-23 00:50:53.091098 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.35s
2026-03-23 00:50:53.091104 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.29s
2026-03-23 00:50:53.091117 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.28s
2026-03-23 00:50:53.091124 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.23s
2026-03-23 00:50:53.091136 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.06s
2026-03-23 00:50:53.091144 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 0.98s
2026-03-23 00:50:53.091150 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 0.97s
2026-03-23 00:50:53.091156 | orchestrator | 2026-03-23 00:50:53 | INFO  | Task 4ba54af8-526c-403c-89ff-3df825408037 is in state SUCCESS
2026-03-23 00:50:53.091162 | orchestrator | 2026-03-23 00:50:53 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:53.092780 | orchestrator | 2026-03-23 00:50:53 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:53.093582 | orchestrator | 2026-03-23 00:50:53 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:56.129878 | orchestrator | 2026-03-23 00:50:56 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
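The interleaved `Task … is in state STARTED` / `Wait 1 second(s) until the next check` records above come from a simple state-polling loop on the orchestrator. A minimal sketch of that pattern, assuming a hypothetical `get_state` callback and task IDs (this is an illustration, not the actual osism implementation):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll task states until every task reaches a terminal state.

    get_state: callable returning the current state string for a task id
    (hypothetical interface, stands in for the real task-status API).
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Each iteration reports every still-pending task, drops the ones that reached a terminal state, and sleeps before the next check, which matches the cadence of the log records above.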
2026-03-23 00:50:56.131014 | orchestrator | 2026-03-23 00:50:56 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:56.132836 | orchestrator | 2026-03-23 00:50:56 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:56.132867 | orchestrator | 2026-03-23 00:50:56 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:50:59.155802 | orchestrator | 2026-03-23 00:50:59 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:50:59.156267 | orchestrator | 2026-03-23 00:50:59 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:50:59.156813 | orchestrator | 2026-03-23 00:50:59 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:50:59.156853 | orchestrator | 2026-03-23 00:50:59 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:02.185180 | orchestrator | 2026-03-23 00:51:02 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:02.187390 | orchestrator | 2026-03-23 00:51:02 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:51:02.189416 | orchestrator | 2026-03-23 00:51:02 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:51:02.189495 | orchestrator | 2026-03-23 00:51:02 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:05.224447 | orchestrator | 2026-03-23 00:51:05 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:05.225147 | orchestrator | 2026-03-23 00:51:05 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:51:05.227746 | orchestrator | 2026-03-23 00:51:05 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:51:05.227791 | orchestrator | 2026-03-23 00:51:05 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:08.251402 | orchestrator | 2026-03-23 00:51:08 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:08.252952 | orchestrator | 2026-03-23 00:51:08 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:51:08.254480 | orchestrator | 2026-03-23 00:51:08 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:51:08.254526 | orchestrator | 2026-03-23 00:51:08 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:11.287233 | orchestrator | 2026-03-23 00:51:11 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:11.289941 | orchestrator | 2026-03-23 00:51:11 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:51:11.291709 | orchestrator | 2026-03-23 00:51:11 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:51:11.291737 | orchestrator | 2026-03-23 00:51:11 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:14.342224 | orchestrator | 2026-03-23 00:51:14 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:14.343943 | orchestrator | 2026-03-23 00:51:14 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:51:14.345214 | orchestrator | 2026-03-23 00:51:14 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:51:14.345535 | orchestrator | 2026-03-23 00:51:14 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:17.393389 | orchestrator | 2026-03-23 00:51:17 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:17.395033 | orchestrator | 2026-03-23 00:51:17 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:51:17.397435 | orchestrator | 2026-03-23 00:51:17 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:51:17.397752 | orchestrator | 2026-03-23 00:51:17 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:20.436962 | orchestrator | 2026-03-23 00:51:20 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:20.440195 | orchestrator | 2026-03-23 00:51:20 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:51:20.442672 | orchestrator | 2026-03-23 00:51:20 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:51:20.442746 | orchestrator | 2026-03-23 00:51:20 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:23.469957 | orchestrator | 2026-03-23 00:51:23 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:23.470694 | orchestrator | 2026-03-23 00:51:23 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:51:23.471723 | orchestrator | 2026-03-23 00:51:23 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:51:23.471946 | orchestrator | 2026-03-23 00:51:23 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:26.497872 | orchestrator | 2026-03-23 00:51:26 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:26.498615 | orchestrator | 2026-03-23 00:51:26 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state STARTED
2026-03-23 00:51:26.499280 | orchestrator | 2026-03-23 00:51:26 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED
2026-03-23 00:51:26.499319 | orchestrator | 2026-03-23 00:51:26 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:51:29.530824 | orchestrator | 2026-03-23 00:51:29 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED
2026-03-23 00:51:29.534535 | orchestrator | 2026-03-23 00:51:29 | INFO  | Task 41d89711-990e-46de-a0e1-0b6530f71414 is in state SUCCESS
2026-03-23 00:51:29.535306 | orchestrator |
2026-03-23 00:51:29.535339 | orchestrator |
2026-03-23 00:51:29.535344 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 00:51:29.535350 | orchestrator |
2026-03-23 00:51:29.535354 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-23 00:51:29.535360 | orchestrator | Monday 23 March 2026 00:49:27 +0000 (0:00:00.162) 0:00:00.162 **********
2026-03-23 00:51:29.535364 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:51:29.535387 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:51:29.535391 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:51:29.535396 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:51:29.535400 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:51:29.535405 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:51:29.535409 | orchestrator |
2026-03-23 00:51:29.535413 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 00:51:29.535418 | orchestrator | Monday 23 March 2026 00:49:27 +0000 (0:00:00.714) 0:00:00.877 **********
2026-03-23 00:51:29.535423 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-23 00:51:29.535428 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-03-23 00:51:29.535433 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-03-23 00:51:29.535437 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-23 00:51:29.535442 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-23 00:51:29.535446 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-03-23 00:51:29.535452 | orchestrator |
2026-03-23 00:51:29.535459 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-03-23 00:51:29.535465 | orchestrator |
2026-03-23 00:51:29.535476 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-03-23 00:51:29.535485 | orchestrator | Monday 23 March 2026 00:49:29
+0000 (0:00:01.120) 0:00:01.997 ********** 2026-03-23 00:51:29.535494 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:51:29.535551 | orchestrator | 2026-03-23 00:51:29.535623 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-23 00:51:29.535630 | orchestrator | Monday 23 March 2026 00:49:30 +0000 (0:00:01.043) 0:00:03.041 ********** 2026-03-23 00:51:29.535638 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535689 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535720 | orchestrator | 2026-03-23 00:51:29.535734 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-23 00:51:29.535739 | orchestrator | Monday 23 March 2026 00:49:31 +0000 (0:00:01.183) 0:00:04.224 ********** 2026-03-23 00:51:29.535743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 
'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535770 | orchestrator | 2026-03-23 00:51:29.535774 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-23 00:51:29.535778 | orchestrator | Monday 23 March 2026 00:49:32 +0000 (0:00:01.277) 0:00:05.502 ********** 2026-03-23 00:51:29.535791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535809 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535822 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535841 | orchestrator | 2026-03-23 00:51:29.535845 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-23 00:51:29.535849 | orchestrator | Monday 23 March 2026 00:49:33 +0000 (0:00:01.117) 0:00:06.619 ********** 2026-03-23 00:51:29.535868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535899 | orchestrator | 2026-03-23 00:51:29.535906 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-23 00:51:29.535911 | orchestrator | Monday 23 March 2026 00:49:35 +0000 (0:00:01.814) 0:00:08.433 ********** 2026-03-23 00:51:29.535917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.535994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.536003 | orchestrator | 2026-03-23 00:51:29.536008 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-23 00:51:29.536013 | orchestrator | Monday 23 March 2026 00:49:36 +0000 (0:00:01.439) 0:00:09.872 ********** 2026-03-23 00:51:29.536019 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:51:29.536024 | orchestrator | changed: [testbed-node-0] 
2026-03-23 00:51:29.536029 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:51:29.536034 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:51:29.536039 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:51:29.536043 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:51:29.536048 | orchestrator |
2026-03-23 00:51:29.536053 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-23 00:51:29.536058 | orchestrator | Monday 23 March 2026 00:49:39 +0000 (0:00:02.798) 0:00:12.671 **********
2026-03-23 00:51:29.536063 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-23 00:51:29.536071 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-23 00:51:29.536082 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-23 00:51:29.536093 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-23 00:51:29.536101 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-23 00:51:29.536107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-23 00:51:29.536115 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-23 00:51:29.536122 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-23 00:51:29.536135 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-23 00:51:29.536142 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-23 00:51:29.536149 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-23 00:51:29.536156 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-23 00:51:29.536163 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-23 00:51:29.536172 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-23 00:51:29.536179 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-23 00:51:29.536186 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-23 00:51:29.536193 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-23 00:51:29.536199 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-23 00:51:29.536206 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-23 00:51:29.536213 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-23 00:51:29.536219 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-23 00:51:29.536225 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-23 00:51:29.536240 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-23 00:51:29.536246 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-23 00:51:29.536253 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-23 00:51:29.536260 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-23 00:51:29.536267 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-23 00:51:29.536273 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-23 00:51:29.536280 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-23 00:51:29.536286 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-23 00:51:29.536293 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-23 00:51:29.536300 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-23 00:51:29.536307 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-23 00:51:29.536314 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-23 00:51:29.536320 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-23 00:51:29.536327 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-23 00:51:29.536333 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-23 00:51:29.536340 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-23 00:51:29.536348 | orchestrator | ok: [testbed-node-4] => (item={'name':
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-23 00:51:29.536354 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-23 00:51:29.536367 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-23 00:51:29.536373 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-23 00:51:29.536381 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-23 00:51:29.536389 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-23 00:51:29.536401 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-23 00:51:29.536409 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-23 00:51:29.536416 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-23 00:51:29.536422 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-23 00:51:29.536429 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-23 00:51:29.536436 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-23 00:51:29.536443 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-23 00:51:29.536456 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-23 00:51:29.536463 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-23 00:51:29.536469 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-23 00:51:29.536476 | orchestrator | 2026-03-23 00:51:29.536483 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-23 00:51:29.536489 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:17.366) 0:00:30.037 ********** 2026-03-23 00:51:29.536496 | orchestrator | 2026-03-23 00:51:29.536503 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-23 00:51:29.536510 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:00.074) 0:00:30.112 ********** 2026-03-23 00:51:29.536517 | orchestrator | 2026-03-23 00:51:29.536523 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-23 00:51:29.536531 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:00.061) 0:00:30.173 ********** 2026-03-23 00:51:29.536537 | orchestrator | 2026-03-23 00:51:29.536544 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-23 00:51:29.536552 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:00.061) 0:00:30.234 ********** 2026-03-23 00:51:29.536578 | orchestrator | 2026-03-23 00:51:29.536585 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-23 00:51:29.536593 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:00.056) 0:00:30.291 
********** 2026-03-23 00:51:29.536597 | orchestrator | 2026-03-23 00:51:29.536602 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-23 00:51:29.536606 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:00.059) 0:00:30.350 ********** 2026-03-23 00:51:29.536610 | orchestrator | 2026-03-23 00:51:29.536614 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-23 00:51:29.536619 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:00.060) 0:00:30.411 ********** 2026-03-23 00:51:29.536623 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:51:29.536628 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.536632 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:51:29.536637 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:51:29.536641 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.536645 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.536650 | orchestrator | 2026-03-23 00:51:29.536654 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-23 00:51:29.536658 | orchestrator | Monday 23 March 2026 00:49:59 +0000 (0:00:01.965) 0:00:32.377 ********** 2026-03-23 00:51:29.536663 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:51:29.536667 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:51:29.536672 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:51:29.536676 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:51:29.536680 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:51:29.536685 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:51:29.536689 | orchestrator | 2026-03-23 00:51:29.536693 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-23 00:51:29.536697 | orchestrator | 2026-03-23 00:51:29.536702 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-03-23 00:51:29.536706 | orchestrator | Monday 23 March 2026 00:50:26 +0000 (0:00:27.046) 0:00:59.423 ********** 2026-03-23 00:51:29.536710 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:51:29.536715 | orchestrator | 2026-03-23 00:51:29.536719 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-23 00:51:29.536724 | orchestrator | Monday 23 March 2026 00:50:26 +0000 (0:00:00.479) 0:00:59.902 ********** 2026-03-23 00:51:29.536737 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:51:29.536741 | orchestrator | 2026-03-23 00:51:29.536746 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-23 00:51:29.536750 | orchestrator | Monday 23 March 2026 00:50:27 +0000 (0:00:00.487) 0:01:00.390 ********** 2026-03-23 00:51:29.536754 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.536759 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.536763 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.536767 | orchestrator | 2026-03-23 00:51:29.536772 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-23 00:51:29.536776 | orchestrator | Monday 23 March 2026 00:50:28 +0000 (0:00:00.811) 0:01:01.202 ********** 2026-03-23 00:51:29.536780 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.536784 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.536789 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.536796 | orchestrator | 2026-03-23 00:51:29.536801 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-23 00:51:29.536805 | orchestrator | Monday 23 March 2026 00:50:28 +0000 (0:00:00.282) 0:01:01.485 ********** 
2026-03-23 00:51:29.536810 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.536814 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.536818 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.536823 | orchestrator | 2026-03-23 00:51:29.536827 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-23 00:51:29.536831 | orchestrator | Monday 23 March 2026 00:50:28 +0000 (0:00:00.390) 0:01:01.875 ********** 2026-03-23 00:51:29.536836 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.536840 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.536844 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.536848 | orchestrator | 2026-03-23 00:51:29.536853 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-23 00:51:29.536857 | orchestrator | Monday 23 March 2026 00:50:29 +0000 (0:00:00.259) 0:01:02.135 ********** 2026-03-23 00:51:29.536861 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.536866 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.536870 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.536874 | orchestrator | 2026-03-23 00:51:29.536878 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-23 00:51:29.536883 | orchestrator | Monday 23 March 2026 00:50:29 +0000 (0:00:00.264) 0:01:02.400 ********** 2026-03-23 00:51:29.536887 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.536891 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.536896 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.536900 | orchestrator | 2026-03-23 00:51:29.536904 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-23 00:51:29.536909 | orchestrator | Monday 23 March 2026 00:50:29 +0000 (0:00:00.238) 0:01:02.638 ********** 2026-03-23 00:51:29.536913 | orchestrator | skipping: 
[testbed-node-0] 2026-03-23 00:51:29.536917 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.536921 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.536926 | orchestrator | 2026-03-23 00:51:29.536930 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-23 00:51:29.536934 | orchestrator | Monday 23 March 2026 00:50:29 +0000 (0:00:00.249) 0:01:02.888 ********** 2026-03-23 00:51:29.536939 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.536943 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.536947 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.536952 | orchestrator | 2026-03-23 00:51:29.536956 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-23 00:51:29.536960 | orchestrator | Monday 23 March 2026 00:50:30 +0000 (0:00:00.463) 0:01:03.351 ********** 2026-03-23 00:51:29.536964 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.536969 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.536976 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.536981 | orchestrator | 2026-03-23 00:51:29.536985 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-23 00:51:29.536989 | orchestrator | Monday 23 March 2026 00:50:30 +0000 (0:00:00.264) 0:01:03.615 ********** 2026-03-23 00:51:29.536994 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.536998 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537002 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537006 | orchestrator | 2026-03-23 00:51:29.537011 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-23 00:51:29.537015 | orchestrator | Monday 23 March 2026 00:50:30 +0000 (0:00:00.214) 0:01:03.830 ********** 2026-03-23 00:51:29.537019 | orchestrator | skipping: 
[testbed-node-0] 2026-03-23 00:51:29.537024 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537028 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537032 | orchestrator | 2026-03-23 00:51:29.537036 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-23 00:51:29.537041 | orchestrator | Monday 23 March 2026 00:50:31 +0000 (0:00:00.239) 0:01:04.069 ********** 2026-03-23 00:51:29.537045 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537049 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537054 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537058 | orchestrator | 2026-03-23 00:51:29.537062 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-23 00:51:29.537066 | orchestrator | Monday 23 March 2026 00:50:31 +0000 (0:00:00.439) 0:01:04.508 ********** 2026-03-23 00:51:29.537071 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537075 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537079 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537084 | orchestrator | 2026-03-23 00:51:29.537088 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-23 00:51:29.537092 | orchestrator | Monday 23 March 2026 00:50:31 +0000 (0:00:00.274) 0:01:04.782 ********** 2026-03-23 00:51:29.537097 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537101 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537105 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537109 | orchestrator | 2026-03-23 00:51:29.537114 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-23 00:51:29.537118 | orchestrator | Monday 23 March 2026 00:50:32 +0000 (0:00:00.244) 0:01:05.027 ********** 2026-03-23 00:51:29.537122 | orchestrator | skipping: 
[testbed-node-0] 2026-03-23 00:51:29.537129 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537134 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537138 | orchestrator | 2026-03-23 00:51:29.537142 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-23 00:51:29.537147 | orchestrator | Monday 23 March 2026 00:50:32 +0000 (0:00:00.261) 0:01:05.288 ********** 2026-03-23 00:51:29.537151 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537155 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537160 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537164 | orchestrator | 2026-03-23 00:51:29.537168 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-23 00:51:29.537172 | orchestrator | Monday 23 March 2026 00:50:32 +0000 (0:00:00.372) 0:01:05.661 ********** 2026-03-23 00:51:29.537177 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537181 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537188 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537192 | orchestrator | 2026-03-23 00:51:29.537197 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-23 00:51:29.537201 | orchestrator | Monday 23 March 2026 00:50:32 +0000 (0:00:00.252) 0:01:05.913 ********** 2026-03-23 00:51:29.537205 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:51:29.537213 | orchestrator | 2026-03-23 00:51:29.537218 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-23 00:51:29.537222 | orchestrator | Monday 23 March 2026 00:50:33 +0000 (0:00:00.482) 0:01:06.395 ********** 2026-03-23 00:51:29.537227 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.537231 | orchestrator | ok: 
[testbed-node-1] 2026-03-23 00:51:29.537235 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.537239 | orchestrator | 2026-03-23 00:51:29.537244 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-23 00:51:29.537248 | orchestrator | Monday 23 March 2026 00:50:34 +0000 (0:00:00.631) 0:01:07.027 ********** 2026-03-23 00:51:29.537252 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.537257 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.537261 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.537265 | orchestrator | 2026-03-23 00:51:29.537270 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-23 00:51:29.537274 | orchestrator | Monday 23 March 2026 00:50:34 +0000 (0:00:00.389) 0:01:07.417 ********** 2026-03-23 00:51:29.537278 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537283 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537287 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537291 | orchestrator | 2026-03-23 00:51:29.537295 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-23 00:51:29.537300 | orchestrator | Monday 23 March 2026 00:50:34 +0000 (0:00:00.289) 0:01:07.707 ********** 2026-03-23 00:51:29.537304 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537308 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537313 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537317 | orchestrator | 2026-03-23 00:51:29.537321 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-23 00:51:29.537326 | orchestrator | Monday 23 March 2026 00:50:35 +0000 (0:00:00.275) 0:01:07.982 ********** 2026-03-23 00:51:29.537330 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537334 | orchestrator | skipping: [testbed-node-1] 
2026-03-23 00:51:29.537339 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537343 | orchestrator | 2026-03-23 00:51:29.537347 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-23 00:51:29.537351 | orchestrator | Monday 23 March 2026 00:50:35 +0000 (0:00:00.383) 0:01:08.365 ********** 2026-03-23 00:51:29.537356 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537360 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537364 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537369 | orchestrator | 2026-03-23 00:51:29.537373 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-23 00:51:29.537377 | orchestrator | Monday 23 March 2026 00:50:35 +0000 (0:00:00.281) 0:01:08.647 ********** 2026-03-23 00:51:29.537382 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537386 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537390 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537394 | orchestrator | 2026-03-23 00:51:29.537399 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-23 00:51:29.537403 | orchestrator | Monday 23 March 2026 00:50:35 +0000 (0:00:00.256) 0:01:08.903 ********** 2026-03-23 00:51:29.537407 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537411 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.537416 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.537420 | orchestrator | 2026-03-23 00:51:29.537424 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-23 00:51:29.537429 | orchestrator | Monday 23 March 2026 00:50:36 +0000 (0:00:00.257) 0:01:09.161 ********** 2026-03-23 00:51:29.537435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537703 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537722 | orchestrator | 2026-03-23 00:51:29.537726 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-23 00:51:29.537731 | orchestrator | Monday 23 March 2026 00:50:37 +0000 (0:00:01.417) 0:01:10.579 ********** 
2026-03-23 00:51:29.537735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537790 | orchestrator | 2026-03-23 00:51:29.537794 | orchestrator | TASK [ovn-db : Check ovn containers] 
******************************************* 2026-03-23 00:51:29.537799 | orchestrator | Monday 23 March 2026 00:50:41 +0000 (0:00:03.931) 0:01:14.510 ********** 2026-03-23 00:51:29.537803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.537848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-23 00:51:29.537853 | orchestrator | 2026-03-23 00:51:29.537857 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-23 00:51:29.537861 | orchestrator | Monday 23 March 2026 00:50:43 +0000 (0:00:02.144) 0:01:16.654 ********** 2026-03-23 00:51:29.537866 | orchestrator | 2026-03-23 00:51:29.537870 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-23 00:51:29.537874 | orchestrator | Monday 23 March 2026 00:50:43 +0000 (0:00:00.055) 0:01:16.710 ********** 2026-03-23 00:51:29.537878 | orchestrator | 2026-03-23 00:51:29.537883 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-23 00:51:29.537891 | orchestrator | Monday 23 March 2026 00:50:43 +0000 (0:00:00.059) 0:01:16.769 ********** 2026-03-23 00:51:29.537896 | orchestrator | 2026-03-23 00:51:29.537900 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-23 00:51:29.537904 | orchestrator | Monday 23 March 2026 00:50:43 +0000 (0:00:00.062) 0:01:16.831 ********** 2026-03-23 00:51:29.537909 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:51:29.537913 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:51:29.537917 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:51:29.537921 | orchestrator | 2026-03-23 00:51:29.537926 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-23 00:51:29.537930 | orchestrator | Monday 23 March 2026 00:50:46 +0000 (0:00:02.391) 0:01:19.222 ********** 2026-03-23 00:51:29.537934 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:51:29.537938 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:51:29.537943 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:51:29.537947 | orchestrator | 2026-03-23 00:51:29.537951 | orchestrator | RUNNING HANDLER [ovn-db : Restart 
ovn-northd container] ************************ 2026-03-23 00:51:29.537955 | orchestrator | Monday 23 March 2026 00:50:49 +0000 (0:00:02.753) 0:01:21.976 ********** 2026-03-23 00:51:29.537960 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:51:29.537964 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:51:29.537968 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:51:29.537972 | orchestrator | 2026-03-23 00:51:29.537977 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-23 00:51:29.537981 | orchestrator | Monday 23 March 2026 00:50:52 +0000 (0:00:03.289) 0:01:25.265 ********** 2026-03-23 00:51:29.537985 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.537990 | orchestrator | 2026-03-23 00:51:29.537994 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-23 00:51:29.537998 | orchestrator | Monday 23 March 2026 00:50:52 +0000 (0:00:00.106) 0:01:25.372 ********** 2026-03-23 00:51:29.538002 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.538007 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.538011 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.538065 | orchestrator | 2026-03-23 00:51:29.538071 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-23 00:51:29.538076 | orchestrator | Monday 23 March 2026 00:50:53 +0000 (0:00:00.858) 0:01:26.231 ********** 2026-03-23 00:51:29.538080 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.538084 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.538089 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:51:29.538093 | orchestrator | 2026-03-23 00:51:29.538097 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-23 00:51:29.538105 | orchestrator | Monday 23 March 2026 00:50:53 +0000 (0:00:00.598) 0:01:26.829 ********** 
2026-03-23 00:51:29.538112 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.538119 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.538126 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.538132 | orchestrator | 2026-03-23 00:51:29.538138 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-23 00:51:29.538144 | orchestrator | Monday 23 March 2026 00:50:54 +0000 (0:00:00.830) 0:01:27.659 ********** 2026-03-23 00:51:29.538150 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.538157 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.538164 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:51:29.538170 | orchestrator | 2026-03-23 00:51:29.538177 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-23 00:51:29.538184 | orchestrator | Monday 23 March 2026 00:50:55 +0000 (0:00:00.516) 0:01:28.175 ********** 2026-03-23 00:51:29.538191 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.538198 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.538208 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.538216 | orchestrator | 2026-03-23 00:51:29.538223 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-23 00:51:29.538255 | orchestrator | Monday 23 March 2026 00:50:55 +0000 (0:00:00.659) 0:01:28.834 ********** 2026-03-23 00:51:29.538262 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.538273 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.538281 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.538289 | orchestrator | 2026-03-23 00:51:29.538296 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-23 00:51:29.538303 | orchestrator | Monday 23 March 2026 00:50:56 +0000 (0:00:00.644) 0:01:29.479 ********** 2026-03-23 00:51:29.538310 | orchestrator | ok: 
[testbed-node-0] 2026-03-23 00:51:29.538318 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.538325 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.538331 | orchestrator | 2026-03-23 00:51:29.538338 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-23 00:51:29.538346 | orchestrator | Monday 23 March 2026 00:50:56 +0000 (0:00:00.398) 0:01:29.877 ********** 2026-03-23 00:51:29.538354 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538362 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538370 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538379 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538388 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538396 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538404 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538417 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538436 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538445 | orchestrator | 2026-03-23 00:51:29.538453 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-23 00:51:29.538460 | orchestrator | Monday 23 March 2026 00:50:58 +0000 (0:00:01.467) 0:01:31.344 ********** 2026-03-23 00:51:29.538467 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538475 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538483 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538490 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538515 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538537 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538543 | orchestrator | 2026-03-23 00:51:29.538548 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-23 00:51:29.538553 | orchestrator | Monday 23 March 2026 00:51:02 +0000 (0:00:04.056) 0:01:35.401 ********** 2026-03-23 00:51:29.538610 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538616 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538622 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538637 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538654 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 00:51:29.538659 | orchestrator | 2026-03-23 00:51:29.538663 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-23 00:51:29.538671 | orchestrator | Monday 23 March 2026 00:51:05 +0000 (0:00:02.779) 0:01:38.181 ********** 2026-03-23 00:51:29.538675 | orchestrator | 2026-03-23 00:51:29.538679 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-23 00:51:29.538684 | orchestrator | Monday 23 March 2026 00:51:05 +0000 (0:00:00.059) 0:01:38.240 ********** 2026-03-23 00:51:29.538688 | orchestrator | 2026-03-23 00:51:29.538692 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-23 00:51:29.538697 | orchestrator | Monday 23 March 2026 00:51:05 +0000 (0:00:00.057) 0:01:38.298 ********** 2026-03-23 00:51:29.538701 | orchestrator | 2026-03-23 00:51:29.538705 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-23 00:51:29.538709 | orchestrator | Monday 23 March 2026 00:51:05 +0000 (0:00:00.166) 0:01:38.465 ********** 2026-03-23 00:51:29.538714 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:51:29.538718 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:51:29.538722 | orchestrator | 2026-03-23 00:51:29.538729 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-23 00:51:29.538734 | orchestrator | Monday 23 March 
2026 00:51:11 +0000 (0:00:06.080) 0:01:44.545 ********** 2026-03-23 00:51:29.538738 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:51:29.538742 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:51:29.538747 | orchestrator | 2026-03-23 00:51:29.538751 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-23 00:51:29.538755 | orchestrator | Monday 23 March 2026 00:51:17 +0000 (0:00:06.084) 0:01:50.629 ********** 2026-03-23 00:51:29.538760 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:51:29.538764 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:51:29.538768 | orchestrator | 2026-03-23 00:51:29.538772 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-23 00:51:29.538777 | orchestrator | Monday 23 March 2026 00:51:23 +0000 (0:00:06.257) 0:01:56.886 ********** 2026-03-23 00:51:29.538781 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:51:29.538785 | orchestrator | 2026-03-23 00:51:29.538789 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-23 00:51:29.538793 | orchestrator | Monday 23 March 2026 00:51:24 +0000 (0:00:00.113) 0:01:57.000 ********** 2026-03-23 00:51:29.538797 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.538802 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.538806 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.538810 | orchestrator | 2026-03-23 00:51:29.538814 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-23 00:51:29.538818 | orchestrator | Monday 23 March 2026 00:51:24 +0000 (0:00:00.768) 0:01:57.769 ********** 2026-03-23 00:51:29.538822 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.538826 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.538830 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:51:29.538834 | 
orchestrator | 2026-03-23 00:51:29.538838 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-23 00:51:29.538842 | orchestrator | Monday 23 March 2026 00:51:25 +0000 (0:00:00.682) 0:01:58.452 ********** 2026-03-23 00:51:29.538846 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.538850 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.538854 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.538858 | orchestrator | 2026-03-23 00:51:29.538863 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-23 00:51:29.538872 | orchestrator | Monday 23 March 2026 00:51:26 +0000 (0:00:00.802) 0:01:59.255 ********** 2026-03-23 00:51:29.538876 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:51:29.538880 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:51:29.538884 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:51:29.538888 | orchestrator | 2026-03-23 00:51:29.538892 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-23 00:51:29.538896 | orchestrator | Monday 23 March 2026 00:51:26 +0000 (0:00:00.622) 0:01:59.877 ********** 2026-03-23 00:51:29.538900 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.538904 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.538908 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.538912 | orchestrator | 2026-03-23 00:51:29.538916 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-23 00:51:29.538921 | orchestrator | Monday 23 March 2026 00:51:27 +0000 (0:00:00.728) 0:02:00.606 ********** 2026-03-23 00:51:29.538925 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:51:29.538929 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:51:29.538933 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:51:29.538937 | orchestrator | 2026-03-23 00:51:29.538941 | orchestrator | 
PLAY RECAP ********************************************************************* 2026-03-23 00:51:29.538945 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-23 00:51:29.538950 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-23 00:51:29.538954 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-23 00:51:29.538958 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:51:29.538963 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:51:29.538967 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:51:29.538971 | orchestrator | 2026-03-23 00:51:29.538975 | orchestrator | 2026-03-23 00:51:29.538979 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:51:29.538983 | orchestrator | Monday 23 March 2026 00:51:28 +0000 (0:00:01.134) 0:02:01.740 ********** 2026-03-23 00:51:29.538990 | orchestrator | =============================================================================== 2026-03-23 00:51:29.538994 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 27.05s 2026-03-23 00:51:29.538998 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.37s 2026-03-23 00:51:29.539002 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.55s 2026-03-23 00:51:29.539006 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.84s 2026-03-23 00:51:29.539010 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.47s 2026-03-23 00:51:29.539014 | orchestrator | 
ovn-db : Copying over config.json files for services -------------------- 4.06s 2026-03-23 00:51:29.539018 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.93s 2026-03-23 00:51:29.539024 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.80s 2026-03-23 00:51:29.539029 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.78s 2026-03-23 00:51:29.539033 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.14s 2026-03-23 00:51:29.539037 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.97s 2026-03-23 00:51:29.539041 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.82s 2026-03-23 00:51:29.539049 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s 2026-03-23 00:51:29.539055 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.44s 2026-03-23 00:51:29.539062 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s 2026-03-23 00:51:29.539068 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.28s 2026-03-23 00:51:29.539075 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.18s 2026-03-23 00:51:29.539081 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.13s 2026-03-23 00:51:29.539092 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s 2026-03-23 00:51:29.539100 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.12s 2026-03-23 00:51:29.539107 | orchestrator | 2026-03-23 00:51:29 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:51:29.539113 | orchestrator | 
2026-03-23 00:51:29 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:51:32.574786 | orchestrator | 2026-03-23 00:51:32 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:51:32.574969 | orchestrator | 2026-03-23 00:51:32 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:51:32.574985 | orchestrator | 2026-03-23 00:51:32 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:51:35.609506 | orchestrator | 2026-03-23 00:51:35 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:51:35.610991 | orchestrator | 2026-03-23 00:51:35 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:51:35.611050 | orchestrator | 2026-03-23 00:51:35 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:51:38.650480 | orchestrator | 2026-03-23 00:51:38 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:51:38.651468 | orchestrator | 2026-03-23 00:51:38 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:51:38.651524 | orchestrator | 2026-03-23 00:51:38 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:51:41.695045 | orchestrator | 2026-03-23 00:51:41 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:51:41.696971 | orchestrator | 2026-03-23 00:51:41 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:51:41.697224 | orchestrator | 2026-03-23 00:51:41 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:51:44.746413 | orchestrator | 2026-03-23 00:51:44 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state STARTED 2026-03-23 00:51:44.748638 | orchestrator | 2026-03-23 00:51:44 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:51:44.748713 | orchestrator | 2026-03-23 00:51:44 | INFO  | Wait 1 second(s) until the next check 2026-03-23 
2026-03-23 00:54:13.924565 | orchestrator | 2026-03-23 00:54:13 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:54:13.932319 | orchestrator | 2026-03-23 00:54:13 | INFO  | Task 71e50a7d-c467-4d20-a419-bb5c3534663a is in state SUCCESS 2026-03-23 00:54:13.934245 | orchestrator | 2026-03-23 00:54:13.934325 | orchestrator | 2026-03-23 00:54:13.934332 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 00:54:13.934337 | orchestrator | 2026-03-23 00:54:13.934341 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 00:54:13.934346 | orchestrator | Monday 23 March 2026 00:48:22 +0000 (0:00:00.385) 0:00:00.385 ********** 2026-03-23 00:54:13.934350 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.934355 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.934362 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.934369 | orchestrator | 2026-03-23 00:54:13.934399 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 00:54:13.934406 | orchestrator | Monday 23 March 2026 00:48:22 +0000 (0:00:00.314) 0:00:00.699 ********** 2026-03-23 00:54:13.934413 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-23 00:54:13.934433 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-23 00:54:13.934440 | orchestrator | ok: 
[testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-23 00:54:13.934446 | orchestrator | 2026-03-23 00:54:13.934452 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-23 00:54:13.934458 | orchestrator | 2026-03-23 00:54:13.934463 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-23 00:54:13.934523 | orchestrator | Monday 23 March 2026 00:48:23 +0000 (0:00:00.378) 0:00:01.078 ********** 2026-03-23 00:54:13.934554 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.934560 | orchestrator | 2026-03-23 00:54:13.934566 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-23 00:54:13.934572 | orchestrator | Monday 23 March 2026 00:48:23 +0000 (0:00:00.652) 0:00:01.730 ********** 2026-03-23 00:54:13.934579 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.934611 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.934630 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.934635 | orchestrator | 2026-03-23 00:54:13.934641 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-23 00:54:13.934647 | orchestrator | Monday 23 March 2026 00:48:25 +0000 (0:00:01.533) 0:00:03.263 ********** 2026-03-23 00:54:13.934653 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.934659 | orchestrator | 2026-03-23 00:54:13.934665 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-23 00:54:13.934671 | orchestrator | Monday 23 March 2026 00:48:26 +0000 (0:00:00.602) 0:00:03.866 ********** 2026-03-23 00:54:13.934677 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.934682 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.934688 | orchestrator | ok: 
[testbed-node-1] 2026-03-23 00:54:13.934694 | orchestrator | 2026-03-23 00:54:13.934699 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-23 00:54:13.934705 | orchestrator | Monday 23 March 2026 00:48:27 +0000 (0:00:01.648) 0:00:05.515 ********** 2026-03-23 00:54:13.934711 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-23 00:54:13.934744 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-23 00:54:13.934750 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-23 00:54:13.934784 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-23 00:54:13.934793 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-23 00:54:13.934799 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-23 00:54:13.934807 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-23 00:54:13.934811 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-23 00:54:13.934814 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-23 00:54:13.934818 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-23 00:54:13.934822 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-23 00:54:13.934825 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-23 00:54:13.934829 | orchestrator | 2026-03-23 00:54:13.934833 | orchestrator | TASK [module-load : Load modules] 
********************************************** 2026-03-23 00:54:13.934843 | orchestrator | Monday 23 March 2026 00:48:30 +0000 (0:00:03.011) 0:00:08.527 ********** 2026-03-23 00:54:13.934847 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-23 00:54:13.934851 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-23 00:54:13.934855 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-23 00:54:13.934859 | orchestrator | 2026-03-23 00:54:13.934863 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-23 00:54:13.934866 | orchestrator | Monday 23 March 2026 00:48:31 +0000 (0:00:00.795) 0:00:09.323 ********** 2026-03-23 00:54:13.934871 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-23 00:54:13.934875 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-23 00:54:13.934879 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-23 00:54:13.934883 | orchestrator | 2026-03-23 00:54:13.934886 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-23 00:54:13.934890 | orchestrator | Monday 23 March 2026 00:48:32 +0000 (0:00:01.242) 0:00:10.565 ********** 2026-03-23 00:54:13.934894 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-23 00:54:13.934898 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.934912 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-23 00:54:13.934916 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.934920 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-23 00:54:13.934924 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.934928 | orchestrator | 2026-03-23 00:54:13.934931 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-23 00:54:13.934935 | orchestrator | Monday 23 March 2026 00:48:33 +0000 (0:00:00.947) 
0:00:11.513 ********** 2026-03-23 00:54:13.934945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.934954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.934958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}}) 2026-03-23 00:54:13.934962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.934971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.934977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.934982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-23 00:54:13.934989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-23 00:54:13.934993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-23 00:54:13.934997 | orchestrator | 2026-03-23 00:54:13.935001 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-23 00:54:13.935006 | orchestrator | Monday 23 March 2026 00:48:35 +0000 (0:00:01.777) 0:00:13.290 ********** 2026-03-23 00:54:13.935012 | orchestrator | changed: 
[testbed-node-0] 2026-03-23 00:54:13.935018 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.935024 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.935030 | orchestrator | 2026-03-23 00:54:13.935035 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-23 00:54:13.935041 | orchestrator | Monday 23 March 2026 00:48:36 +0000 (0:00:01.120) 0:00:14.411 ********** 2026-03-23 00:54:13.935046 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-23 00:54:13.935057 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-23 00:54:13.935063 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-23 00:54:13.935070 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-23 00:54:13.935095 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-23 00:54:13.935100 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-23 00:54:13.935104 | orchestrator | 2026-03-23 00:54:13.935119 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-23 00:54:13.935123 | orchestrator | Monday 23 March 2026 00:48:38 +0000 (0:00:01.777) 0:00:16.188 ********** 2026-03-23 00:54:13.935127 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.935131 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.935135 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.935138 | orchestrator | 2026-03-23 00:54:13.935142 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-23 00:54:13.935146 | orchestrator | Monday 23 March 2026 00:48:39 +0000 (0:00:00.888) 0:00:17.076 ********** 2026-03-23 00:54:13.935150 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.935153 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.935157 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.935161 | orchestrator | 2026-03-23 
00:54:13.935190 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-23 00:54:13.935195 | orchestrator | Monday 23 March 2026 00:48:41 +0000 (0:00:02.136) 0:00:19.213 ********** 2026-03-23 00:54:13.935199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-23 00:54:13.935209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-23 00:54:13.935217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.935222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-23 00:54:13.935230 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.935234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-23 00:54:13.935238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-23 00:54:13.935242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.935246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-23 00:54:13.935250 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.935310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-23 00:54:13.935318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-23 00:54:13.935326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.935330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-23 00:54:13.935334 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.935338 | orchestrator | 2026-03-23 00:54:13.935341 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-23 00:54:13.935345 | orchestrator | Monday 23 March 2026 00:48:42 +0000 (0:00:01.367) 0:00:20.581 ********** 2026-03-23 00:54:13.935349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-23 
00:54:13.935402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.935461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 
'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-23 00:54:13.935465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.935473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-23 00:54:13.935488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.935498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5', '__omit_place_holder__073248c5a237f8989a837c174063b7809be41ad5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-23 00:54:13.935502 | orchestrator | 2026-03-23 00:54:13.935506 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-23 00:54:13.935510 | orchestrator | Monday 23 March 2026 00:48:46 +0000 (0:00:03.290) 0:00:23.872 ********** 2026-03-23 00:54:13.935514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.935548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-23 00:54:13.935552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-23 00:54:13.935556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-23 00:54:13.935560 | orchestrator | 2026-03-23 00:54:13.935564 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-23 00:54:13.935568 | orchestrator | Monday 23 March 2026 00:48:49 +0000 (0:00:03.292) 0:00:27.164 ********** 2026-03-23 00:54:13.935572 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-23 00:54:13.935576 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-23 00:54:13.935580 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-23 00:54:13.935584 | orchestrator | 2026-03-23 00:54:13.935587 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-23 00:54:13.935591 | orchestrator | Monday 23 March 2026 00:48:51 +0000 (0:00:02.427) 0:00:29.591 ********** 2026-03-23 00:54:13.935595 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-23 00:54:13.935599 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-23 00:54:13.935605 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-23 00:54:13.935609 | orchestrator | 2026-03-23 00:54:13.936214 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 
2026-03-23 00:54:13.936231 | orchestrator | Monday 23 March 2026 00:48:56 +0000 (0:00:05.185) 0:00:34.776 ********** 2026-03-23 00:54:13.936235 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.936240 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.936243 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.936247 | orchestrator | 2026-03-23 00:54:13.936251 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-23 00:54:13.936300 | orchestrator | Monday 23 March 2026 00:48:58 +0000 (0:00:01.096) 0:00:35.873 ********** 2026-03-23 00:54:13.936304 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-23 00:54:13.936313 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-23 00:54:13.936317 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-23 00:54:13.936321 | orchestrator | 2026-03-23 00:54:13.936325 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-23 00:54:13.936328 | orchestrator | Monday 23 March 2026 00:49:00 +0000 (0:00:02.093) 0:00:37.966 ********** 2026-03-23 00:54:13.936332 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-23 00:54:13.936336 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-23 00:54:13.936340 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-23 00:54:13.936344 | orchestrator | 2026-03-23 00:54:13.936347 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] 
********************************* 2026-03-23 00:54:13.936351 | orchestrator | Monday 23 March 2026 00:49:02 +0000 (0:00:01.970) 0:00:39.937 ********** 2026-03-23 00:54:13.936355 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-23 00:54:13.936362 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-23 00:54:13.936367 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-23 00:54:13.936456 | orchestrator | 2026-03-23 00:54:13.936476 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-23 00:54:13.936480 | orchestrator | Monday 23 March 2026 00:49:03 +0000 (0:00:01.594) 0:00:41.532 ********** 2026-03-23 00:54:13.936484 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-23 00:54:13.936488 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-23 00:54:13.936492 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-23 00:54:13.936496 | orchestrator | 2026-03-23 00:54:13.936500 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-23 00:54:13.936504 | orchestrator | Monday 23 March 2026 00:49:05 +0000 (0:00:02.095) 0:00:43.628 ********** 2026-03-23 00:54:13.936507 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.936511 | orchestrator | 2026-03-23 00:54:13.936515 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-23 00:54:13.936519 | orchestrator | Monday 23 March 2026 00:49:06 +0000 (0:00:00.670) 0:00:44.299 ********** 2026-03-23 00:54:13.936524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.936557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.936569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.936576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.936581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.936585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.936589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936605 | orchestrator |
2026-03-23 00:54:13.936609 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-23 00:54:13.936612 | orchestrator | Monday 23 March 2026 00:49:09 +0000 (0:00:03.402) 0:00:47.701 **********
2026-03-23 00:54:13.936620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936635 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.936639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936670 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.936674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936692 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.936696 | orchestrator |
2026-03-23 00:54:13.936700 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-23 00:54:13.936704 | orchestrator | Monday 23 March 2026 00:49:10 +0000 (0:00:00.652) 0:00:48.354 **********
2026-03-23 00:54:13.936708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936727 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.936734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936745 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.936749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936763 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.936767 | orchestrator |
2026-03-23 00:54:13.936771 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-23 00:54:13.936775 | orchestrator | Monday 23 March 2026 00:49:11 +0000 (0:00:00.935) 0:00:49.289 **********
2026-03-23 00:54:13.936779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936793 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.936800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936824 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.936830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936835 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.936839 | orchestrator |
2026-03-23 00:54:13.936844 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-23 00:54:13.936848 | orchestrator | Monday 23 March 2026 00:49:12 +0000 (0:00:00.841) 0:00:50.130 **********
2026-03-23 00:54:13.936858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936874 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.936878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936891 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.936904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936924 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.936928 | orchestrator |
2026-03-23 00:54:13.936933 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-23 00:54:13.936937 | orchestrator | Monday 23 March 2026 00:49:12 +0000 (0:00:00.501) 0:00:50.632 **********
2026-03-23 00:54:13.936941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.936950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.936955 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.936962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.936967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.937033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937038 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.937042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.937069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.937074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937078 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.937082 | orchestrator |
2026-03-23 00:54:13.937086 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-03-23 00:54:13.937091 | orchestrator | Monday 23 March 2026 00:49:13 +0000 (0:00:01.040) 0:00:51.672 **********
2026-03-23 00:54:13.937095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.937125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.937137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937142 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.937148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.937155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.937162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937168 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.937174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.937201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.937213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937220 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.937226 | orchestrator |
2026-03-23 00:54:13.937237 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-03-23 00:54:13.937244 | orchestrator | Monday 23 March 2026 00:49:14 +0000 (0:00:00.494) 0:00:52.166 **********
2026-03-23 00:54:13.937251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.937368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
 2026-03-23 00:54:13.937376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.937382 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.937387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-23 00:54:13.937391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-23 00:54:13.937407 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.937411 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.937418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-23 00:54:13.937422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-23 00:54:13.937426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937430 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.937434 | orchestrator |
2026-03-23 00:54:13.937437 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-03-23 00:54:13.937441 | orchestrator | Monday 23 March 2026 00:49:15 +0000 (0:00:00.647) 0:00:52.814 **********
2026-03-23 00:54:13.937445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-23 00:54:13.937449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-23 00:54:13.937457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.937461 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.937471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-23 00:54:13.937475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-23 00:54:13.937479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-23 00:54:13.937483 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.937487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-23 00:54:13.937491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-23 00:54:13.937495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937502 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.937506 | orchestrator |
2026-03-23 00:54:13.937510 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-03-23 00:54:13.937514 | orchestrator | Monday 23 March 2026 00:49:16 +0000 (0:00:01.741) 0:00:54.556 **********
2026-03-23 00:54:13.937518 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-23 00:54:13.937522 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-23 00:54:13.937529 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-23 00:54:13.937533 | orchestrator |
2026-03-23 00:54:13.937536 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-03-23 00:54:13.937540 | orchestrator | Monday 23 March 2026 00:49:18 +0000 (0:00:01.770) 0:00:56.327 **********
2026-03-23 00:54:13.937544 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-23 00:54:13.937548 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-23 00:54:13.937552 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-23 00:54:13.937556 | orchestrator |
2026-03-23 00:54:13.937564 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-03-23 00:54:13.937568 | orchestrator | Monday 23 March 2026 00:49:19 +0000 (0:00:01.339) 0:00:57.666 **********
2026-03-23 00:54:13.937572 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-23 00:54:13.937575 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-23 00:54:13.937579 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-23 00:54:13.937583 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.937587 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-23 00:54:13.937591 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.937594 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-23 00:54:13.937598 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-23 00:54:13.937602 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.937614 | orchestrator |
2026-03-23 00:54:13.937618 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-03-23 00:54:13.937622 | orchestrator | Monday 23 March 2026 00:49:20 +0000 (0:00:01.047) 0:00:58.713 **********
2026-03-23 00:54:13.937626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True,
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.937630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.937638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-23 00:54:13.937645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.937662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.937666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-23 00:54:13.937670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-23 00:54:13.937691 | orchestrator |
2026-03-23 00:54:13.937695 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-03-23 00:54:13.937699 | orchestrator | Monday 23 March 2026 00:49:23 +0000 (0:00:00.544) 0:01:01.085 **********
2026-03-23 00:54:13.937703 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:54:13.937707 | orchestrator |
2026-03-23 00:54:13.937710 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-03-23 00:54:13.937714 | orchestrator | Monday 23
March 2026 00:49:23 +0000 (0:00:00.544) 0:01:01.629 ********** 2026-03-23 00:54:13.937719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-23 00:54:13.937726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.937733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.937737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.937741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-23 00:54:13.937760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.937764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-23 00:54:13.939395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.939400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939415 | orchestrator | 2026-03-23 00:54:13.939419 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-23 00:54:13.939424 | orchestrator | Monday 23 March 2026 00:49:27 +0000 (0:00:03.605) 0:01:05.235 ********** 2026-03-23 00:54:13.939428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-23 00:54:13.939442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-03-23 00:54:13.939451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939478 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.939484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-23 00:54:13.939491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.939496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2026-03-23 00:54:13.939510 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.939524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-23 00:54:13.939530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.939539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939550 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.939556 | orchestrator | 2026-03-23 00:54:13.939562 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-23 00:54:13.939567 | orchestrator | Monday 23 March 2026 00:49:28 +0000 (0:00:00.726) 0:01:05.962 ********** 2026-03-23 00:54:13.939573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-23 00:54:13.939581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-23 00:54:13.939588 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.939594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-23 00:54:13.939599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-23 00:54:13.939604 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.939610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-23 00:54:13.939615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-23 00:54:13.939621 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.939630 | orchestrator | 2026-03-23 00:54:13.939640 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-23 00:54:13.939646 | orchestrator | Monday 23 March 2026 00:49:29 +0000 (0:00:01.007) 0:01:06.969 ********** 2026-03-23 00:54:13.939652 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.939657 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.939663 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.939791 | orchestrator | 2026-03-23 00:54:13.939800 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-23 00:54:13.939804 | orchestrator | Monday 23 March 2026 00:49:30 +0000 (0:00:01.504) 0:01:08.473 ********** 2026-03-23 00:54:13.939808 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.939816 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.939820 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.939824 | orchestrator | 2026-03-23 00:54:13.939831 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-23 00:54:13.939835 | orchestrator | Monday 23 March 2026 
00:49:32 +0000 (0:00:01.801) 0:01:10.274 ********** 2026-03-23 00:54:13.939839 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.939842 | orchestrator | 2026-03-23 00:54:13.939846 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-23 00:54:13.939850 | orchestrator | Monday 23 March 2026 00:49:33 +0000 (0:00:00.549) 0:01:10.824 ********** 2026-03-23 00:54:13.939855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.939860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.939877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.939893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939897 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939905 | orchestrator | 2026-03-23 00:54:13.939909 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-23 00:54:13.939913 | orchestrator | Monday 23 March 2026 00:49:36 +0000 (0:00:03.563) 0:01:14.387 ********** 2026-03-23 00:54:13.939920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.939930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939938 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.939942 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.939946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939957 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.939967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.939972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939976 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.939981 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.939985 | orchestrator | 2026-03-23 00:54:13.939989 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-23 00:54:13.939993 | orchestrator | Monday 23 March 2026 00:49:37 +0000 (0:00:01.018) 0:01:15.406 ********** 2026-03-23 00:54:13.939998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940008 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.940013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2026-03-23 00:54:13.940022 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.940026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940039 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940044 | orchestrator | 2026-03-23 00:54:13.940048 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-23 00:54:13.940053 | orchestrator | Monday 23 March 2026 00:49:38 +0000 (0:00:00.771) 0:01:16.177 ********** 2026-03-23 00:54:13.940057 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.940061 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.940066 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.940070 | orchestrator | 2026-03-23 00:54:13.940074 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-23 00:54:13.940078 | orchestrator | Monday 23 March 2026 00:49:39 +0000 (0:00:01.375) 0:01:17.552 ********** 2026-03-23 00:54:13.940082 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.940085 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.940089 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.940093 | orchestrator | 2026-03-23 00:54:13.940099 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-23 00:54:13.940103 | orchestrator | Monday 23 March 2026 00:49:42 +0000 (0:00:02.247) 0:01:19.800 ********** 2026-03-23 00:54:13.940106 | orchestrator | 
skipping: [testbed-node-0] 2026-03-23 00:54:13.940110 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.940114 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940118 | orchestrator | 2026-03-23 00:54:13.940121 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-23 00:54:13.940125 | orchestrator | Monday 23 March 2026 00:49:42 +0000 (0:00:00.240) 0:01:20.040 ********** 2026-03-23 00:54:13.940129 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.940132 | orchestrator | 2026-03-23 00:54:13.940136 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-23 00:54:13.940143 | orchestrator | Monday 23 March 2026 00:49:42 +0000 (0:00:00.733) 0:01:20.774 ********** 2026-03-23 00:54:13.940147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-23 00:54:13.940152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-23 00:54:13.940156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-23 00:54:13.940163 | orchestrator | 2026-03-23 00:54:13.940167 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-23 00:54:13.940170 | orchestrator | Monday 23 March 2026 00:49:45 +0000 (0:00:02.884) 0:01:23.659 ********** 2026-03-23 00:54:13.940177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-23 00:54:13.940181 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.940187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-23 00:54:13.940191 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.940195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-23 00:54:13.940199 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940203 | orchestrator | 2026-03-23 00:54:13.940207 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-23 00:54:13.940210 | orchestrator | Monday 23 March 2026 00:49:47 +0000 (0:00:01.735) 0:01:25.395 ********** 2026-03-23 00:54:13.940215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-23 00:54:13.940225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-23 00:54:13.940230 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.940234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-23 00:54:13.940238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-23 00:54:13.940242 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.940248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-23 00:54:13.940252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-23 00:54:13.940256 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940393 | orchestrator | 2026-03-23 00:54:13.940413 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-03-23 00:54:13.940417 | orchestrator | Monday 23 March 2026 00:49:49 +0000 (0:00:02.170) 0:01:27.566 ********** 2026-03-23 00:54:13.940421 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.940425 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.940429 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940432 | orchestrator | 2026-03-23 00:54:13.940436 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-23 00:54:13.940440 | orchestrator | Monday 23 March 2026 00:49:50 +0000 (0:00:00.435) 0:01:28.001 ********** 2026-03-23 00:54:13.940443 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.940447 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.940451 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940455 | orchestrator | 2026-03-23 00:54:13.940458 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-23 00:54:13.940462 | orchestrator | Monday 23 March 2026 00:49:51 +0000 (0:00:01.178) 0:01:29.180 ********** 2026-03-23 00:54:13.940466 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.940476 | orchestrator | 2026-03-23 00:54:13.940480 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-23 00:54:13.940484 | orchestrator | Monday 23 March 2026 00:49:52 +0000 (0:00:00.788) 0:01:29.969 ********** 2026-03-23 00:54:13.940488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.940493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.940523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.940531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940562 | orchestrator | 2026-03-23 00:54:13.940566 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-23 00:54:13.940570 | orchestrator | Monday 23 March 2026 00:49:55 +0000 (0:00:03.443) 0:01:33.413 ********** 2026-03-23 00:54:13.940574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.940578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940609 | orchestrator | skipping: 
[testbed-node-1] 2026-03-23 00:54:13.940613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.940617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940629 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.940639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.940646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940658 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940662 | orchestrator | 2026-03-23 00:54:13.940665 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-23 00:54:13.940669 | orchestrator | Monday 23 March 2026 00:49:56 +0000 (0:00:00.800) 0:01:34.214 ********** 2026-03-23 00:54:13.940673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940681 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.940685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940693 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.940697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-23 00:54:13.940711 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940715 | orchestrator | 2026-03-23 00:54:13.940718 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-23 00:54:13.940722 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:01.098) 0:01:35.312 ********** 2026-03-23 00:54:13.940726 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.940730 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.940733 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.940737 | orchestrator | 2026-03-23 00:54:13.940743 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-23 00:54:13.940747 | orchestrator | Monday 23 March 2026 00:49:59 +0000 (0:00:01.532) 0:01:36.845 ********** 2026-03-23 00:54:13.940751 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.940754 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.940758 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.940762 | orchestrator | 2026-03-23 00:54:13.940766 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-23 00:54:13.940769 | orchestrator | Monday 23 March 2026 00:50:01 +0000 (0:00:02.117) 0:01:38.963 ********** 2026-03-23 00:54:13.940773 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.940777 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.940780 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940784 | orchestrator | 2026-03-23 00:54:13.940788 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-23 00:54:13.940792 | orchestrator | Monday 23 March 2026 00:50:01 
+0000 (0:00:00.274) 0:01:39.237 ********** 2026-03-23 00:54:13.940795 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.940799 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.940803 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.940806 | orchestrator | 2026-03-23 00:54:13.940810 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-23 00:54:13.940814 | orchestrator | Monday 23 March 2026 00:50:01 +0000 (0:00:00.274) 0:01:39.511 ********** 2026-03-23 00:54:13.940818 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.940821 | orchestrator | 2026-03-23 00:54:13.940825 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-23 00:54:13.940829 | orchestrator | Monday 23 March 2026 00:50:02 +0000 (0:00:00.955) 0:01:40.467 ********** 2026-03-23 00:54:13.940833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 00:54:13.940838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 00:54:13.940844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.940870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 00:54:13.940877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 00:54:13.941933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.941960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.941965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.941970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.941973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.941978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 00:54:13.941988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  
2026-03-23 00:54:13.941998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942057 | orchestrator | 2026-03-23 00:54:13.942061 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-23 00:54:13.942065 | orchestrator | Monday 23 March 2026 00:50:07 +0000 (0:00:04.852) 0:01:45.320 ********** 2026-03-23 00:54:13.942069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 00:54:13.942080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 00:54:13.942084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 00:54:13.942089 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 00:54:13.942093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942128 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.942133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942152 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.942164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 00:54:13.942170 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 00:54:13.942176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.942214 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.942219 | orchestrator | 2026-03-23 00:54:13.942225 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 
2026-03-23 00:54:13.942231 | orchestrator | Monday 23 March 2026 00:50:08 +0000 (0:00:01.048) 0:01:46.368 ********** 2026-03-23 00:54:13.942238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-23 00:54:13.942247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-23 00:54:13.942254 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.942285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-23 00:54:13.942291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-23 00:54:13.942297 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.942303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-23 00:54:13.942309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-23 00:54:13.942319 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.942324 | orchestrator | 2026-03-23 00:54:13.942330 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-23 00:54:13.942335 | orchestrator | Monday 23 March 2026 00:50:09 +0000 
(0:00:01.239) 0:01:47.608 ********** 2026-03-23 00:54:13.942339 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.942343 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.942347 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.942351 | orchestrator | 2026-03-23 00:54:13.942354 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-23 00:54:13.942358 | orchestrator | Monday 23 March 2026 00:50:11 +0000 (0:00:01.326) 0:01:48.935 ********** 2026-03-23 00:54:13.942362 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.942366 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.942381 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.942385 | orchestrator | 2026-03-23 00:54:13.942389 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-23 00:54:13.942392 | orchestrator | Monday 23 March 2026 00:50:12 +0000 (0:00:01.728) 0:01:50.663 ********** 2026-03-23 00:54:13.942396 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.942400 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.942403 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.942407 | orchestrator | 2026-03-23 00:54:13.942411 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-23 00:54:13.942416 | orchestrator | Monday 23 March 2026 00:50:13 +0000 (0:00:00.221) 0:01:50.885 ********** 2026-03-23 00:54:13.942422 | orchestrator | included: glance for testbed-node-0, testbed-node-2, testbed-node-1 2026-03-23 00:54:13.942428 | orchestrator | 2026-03-23 00:54:13.942434 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-23 00:54:13.942441 | orchestrator | Monday 23 March 2026 00:50:14 +0000 (0:00:01.339) 0:01:52.224 ********** 2026-03-23 00:54:13.942456 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 00:54:13.942469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.942480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 00:54:13.942494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 00:54:13.942502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.942512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.942520 | orchestrator | 2026-03-23 00:54:13.942526 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-23 00:54:13.942532 | orchestrator | Monday 23 March 2026 00:50:18 +0000 (0:00:04.297) 0:01:56.521 ********** 2026-03-23 00:54:13.942538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-23 00:54:13.942551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.942564 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.942570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-23 00:54:13.942578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.942587 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.942594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-23 00:54:13.942601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.942607 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.942611 | orchestrator | 2026-03-23 00:54:13.942616 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-23 00:54:13.942620 | orchestrator | Monday 23 March 2026 00:50:22 +0000 (0:00:03.737) 0:02:00.259 ********** 2026-03-23 00:54:13.942629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-23 00:54:13.942634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-23 00:54:13.942638 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.942643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-23 00:54:13.942647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-23 00:54:13.942652 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.942657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-23 00:54:13.942661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-23 00:54:13.942665 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.942670 | orchestrator | 2026-03-23 00:54:13.942674 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-23 00:54:13.942678 | orchestrator | Monday 23 March 2026 00:50:25 +0000 (0:00:03.033) 0:02:03.293 ********** 2026-03-23 00:54:13.942682 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.942686 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.942691 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.942695 | orchestrator | 2026-03-23 00:54:13.942699 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-23 00:54:13.942706 | orchestrator | Monday 23 March 2026 00:50:26 +0000 (0:00:01.332) 0:02:04.626 ********** 2026-03-23 00:54:13.942710 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.942715 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.942719 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.942723 | orchestrator | 2026-03-23 00:54:13.942728 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-23 00:54:13.942734 | orchestrator | Monday 23 March 2026 00:50:28 +0000 (0:00:01.815) 0:02:06.441 ********** 2026-03-23 00:54:13.942738 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.942743 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.942747 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.942751 | orchestrator | 2026-03-23 00:54:13.942755 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-23 00:54:13.942760 | orchestrator | Monday 23 March 2026 00:50:28 +0000 (0:00:00.258) 0:02:06.699 ********** 2026-03-23 00:54:13.942764 | orchestrator | 
included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.942768 | orchestrator | 2026-03-23 00:54:13.942773 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-23 00:54:13.942777 | orchestrator | Monday 23 March 2026 00:50:29 +0000 (0:00:00.871) 0:02:07.571 ********** 2026-03-23 00:54:13.942784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 00:54:13.942789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 00:54:13.942794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 00:54:13.942798 | orchestrator | 2026-03-23 00:54:13.942803 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-23 00:54:13.942807 | orchestrator | Monday 23 March 2026 00:50:32 +0000 (0:00:02.959) 0:02:10.530 ********** 2026-03-23 00:54:13.942811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-23 00:54:13.942818 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.942826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-23 00:54:13.942830 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.942837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-23 00:54:13.942842 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.942847 | orchestrator | 2026-03-23 00:54:13.942851 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-23 00:54:13.942856 | orchestrator | Monday 23 March 2026 00:50:33 +0000 (0:00:00.325) 0:02:10.856 ********** 2026-03-23 00:54:13.942860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-23 00:54:13.942864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-23 00:54:13.942869 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.942873 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-23 00:54:13.942877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-23 00:54:13.942882 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.942886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-23 00:54:13.942890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-23 00:54:13.942895 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.942899 | orchestrator | 2026-03-23 00:54:13.942903 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-23 00:54:13.942908 | orchestrator | Monday 23 March 2026 00:50:33 +0000 (0:00:00.648) 0:02:11.504 ********** 2026-03-23 00:54:13.942912 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.942919 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.942923 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.942927 | orchestrator | 2026-03-23 00:54:13.942931 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-23 00:54:13.942935 | orchestrator | Monday 23 March 2026 00:50:35 +0000 (0:00:01.303) 0:02:12.807 ********** 2026-03-23 00:54:13.942939 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.942942 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.942946 | orchestrator | changed: 
[testbed-node-2] 2026-03-23 00:54:13.942950 | orchestrator | 2026-03-23 00:54:13.942954 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-23 00:54:13.942957 | orchestrator | Monday 23 March 2026 00:50:36 +0000 (0:00:01.822) 0:02:14.629 ********** 2026-03-23 00:54:13.942961 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.942965 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.942968 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.942972 | orchestrator | 2026-03-23 00:54:13.942976 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-23 00:54:13.942980 | orchestrator | Monday 23 March 2026 00:50:37 +0000 (0:00:00.270) 0:02:14.900 ********** 2026-03-23 00:54:13.942983 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.942987 | orchestrator | 2026-03-23 00:54:13.942991 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-23 00:54:13.942994 | orchestrator | Monday 23 March 2026 00:50:38 +0000 (0:00:00.931) 0:02:15.831 ********** 2026-03-23 00:54:13.943006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:54:13.943011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:54:13.943025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:54:13.943032 | orchestrator | 2026-03-23 00:54:13.943036 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-23 00:54:13.943040 | 
orchestrator | Monday 23 March 2026 00:50:41 +0000 (0:00:03.422) 0:02:19.254 ********** 2026-03-23 00:54:13.943130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-23 00:54:13.943138 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.943145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-23 00:54:13.943153 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.943160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-23 00:54:13.943165 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.943169 | orchestrator | 2026-03-23 00:54:13.943172 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-23 00:54:13.943176 | orchestrator | Monday 23 March 2026 00:50:42 +0000 (0:00:00.637) 0:02:19.892 ********** 2026-03-23 00:54:13.943183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-23 00:54:13.943188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-23 00:54:13.943192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-23 00:54:13.943198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-23 00:54:13.943203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-23 00:54:13.943208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-23 00:54:13.943212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-23 00:54:13.943216 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.943220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}})  2026-03-23 00:54:13.943224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-23 00:54:13.943228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-23 00:54:13.943232 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.943235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-23 00:54:13.943241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-23 00:54:13.943245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-23 00:54:13.943249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-23 00:54:13.943253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-23 00:54:13.943295 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.943302 | orchestrator | 2026-03-23 00:54:13.943309 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-23 00:54:13.943315 | orchestrator | Monday 23 March 2026 00:50:42 +0000 (0:00:00.779) 0:02:20.672 ********** 2026-03-23 00:54:13.943322 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.943327 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.943335 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.943339 | orchestrator | 2026-03-23 00:54:13.943343 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-23 00:54:13.943346 | orchestrator | Monday 23 March 2026 00:50:44 +0000 (0:00:01.316) 0:02:21.988 ********** 2026-03-23 00:54:13.943350 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.943354 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.943358 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.943361 | orchestrator | 2026-03-23 00:54:13.943365 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-23 00:54:13.943369 | orchestrator | Monday 23 March 2026 00:50:46 +0000 (0:00:01.831) 0:02:23.820 ********** 2026-03-23 00:54:13.943372 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.943376 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.943380 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.943384 | orchestrator | 2026-03-23 00:54:13.943387 | 
orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-23 00:54:13.943391 | orchestrator | Monday 23 March 2026 00:50:46 +0000 (0:00:00.267) 0:02:24.088 ********** 2026-03-23 00:54:13.943395 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.943398 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.943402 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.943406 | orchestrator | 2026-03-23 00:54:13.943410 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-23 00:54:13.943413 | orchestrator | Monday 23 March 2026 00:50:46 +0000 (0:00:00.252) 0:02:24.340 ********** 2026-03-23 00:54:13.943417 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.943421 | orchestrator | 2026-03-23 00:54:13.943424 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-23 00:54:13.943428 | orchestrator | Monday 23 March 2026 00:50:47 +0000 (0:00:00.961) 0:02:25.302 ********** 2026-03-23 00:54:13.943433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:54:13.943440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:54:13.943451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:54:13.943455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:54:13.943460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:54:13.943464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:54:13.943468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:54:13.943478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:54:13.943484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:54:13.943488 | orchestrator | 2026-03-23 00:54:13.943492 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-23 00:54:13.943496 | orchestrator | Monday 23 March 2026 00:50:50 +0000 (0:00:03.447) 0:02:28.750 ********** 2026-03-23 00:54:13.943500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:54:13.943504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:54:13.943508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:54:13.943512 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.943518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:54:13.943528 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:54:13.943533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:54:13.943537 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.943541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:54:13.943545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:54:13.943549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:54:13.943556 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.943559 | orchestrator | 2026-03-23 00:54:13.943563 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-23 00:54:13.943569 | orchestrator | Monday 23 March 2026 00:50:51 +0000 (0:00:00.560) 0:02:29.310 ********** 2026-03-23 
00:54:13.943573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-23 00:54:13.943578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-23 00:54:13.943584 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.943588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-23 00:54:13.943592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-23 00:54:13.943596 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.943600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-23 00:54:13.943604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-23 00:54:13.943607 | orchestrator | skipping: 
[testbed-node-2] 2026-03-23 00:54:13.943611 | orchestrator | 2026-03-23 00:54:13.943615 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-23 00:54:13.943619 | orchestrator | Monday 23 March 2026 00:50:52 +0000 (0:00:00.870) 0:02:30.180 ********** 2026-03-23 00:54:13.943622 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.943626 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.943630 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.943633 | orchestrator | 2026-03-23 00:54:13.943637 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-23 00:54:13.943641 | orchestrator | Monday 23 March 2026 00:50:53 +0000 (0:00:01.328) 0:02:31.508 ********** 2026-03-23 00:54:13.943645 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.943648 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.943652 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.943656 | orchestrator | 2026-03-23 00:54:13.943660 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-23 00:54:13.943663 | orchestrator | Monday 23 March 2026 00:50:55 +0000 (0:00:01.753) 0:02:33.262 ********** 2026-03-23 00:54:13.943667 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.943688 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.943692 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.943700 | orchestrator | 2026-03-23 00:54:13.943704 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-23 00:54:13.943708 | orchestrator | Monday 23 March 2026 00:50:55 +0000 (0:00:00.306) 0:02:33.569 ********** 2026-03-23 00:54:13.943711 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.943715 | orchestrator | 2026-03-23 00:54:13.943719 | orchestrator | TASK 
[haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-23 00:54:13.943722 | orchestrator | Monday 23 March 2026 00:50:56 +0000 (0:00:01.064) 0:02:34.633 ********** 2026-03-23 00:54:13.943727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 00:54:13.943734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.943741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 00:54:13.943745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.943750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 00:54:13.943759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.943763 | orchestrator | 2026-03-23 00:54:13.943767 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-23 00:54:13.943772 | orchestrator | Monday 23 March 2026 00:51:00 +0000 (0:00:03.267) 0:02:37.901 ********** 2026-03-23 00:54:13.943781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 00:54:13.943786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.943790 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.943795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 00:54:13.943802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.943807 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.943813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 00:54:13.943820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.943824 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.943829 | orchestrator | 2026-03-23 00:54:13.943833 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-23 00:54:13.943837 | orchestrator | Monday 23 March 2026 00:51:00 +0000 (0:00:00.642) 0:02:38.543 ********** 2026-03-23 00:54:13.943841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-23 00:54:13.943846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-23 00:54:13.943851 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.943855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  
2026-03-23 00:54:13.943860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-23 00:54:13.943866 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.943871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-23 00:54:13.943875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-23 00:54:13.943879 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.943884 | orchestrator | 2026-03-23 00:54:13.943888 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-23 00:54:13.943892 | orchestrator | Monday 23 March 2026 00:51:01 +0000 (0:00:00.904) 0:02:39.448 ********** 2026-03-23 00:54:13.943897 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.943901 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.943905 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.943909 | orchestrator | 2026-03-23 00:54:13.943913 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-23 00:54:13.943918 | orchestrator | Monday 23 March 2026 00:51:03 +0000 (0:00:01.340) 0:02:40.788 ********** 2026-03-23 00:54:13.943922 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.943926 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.943930 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.943935 | orchestrator | 2026-03-23 00:54:13.943939 | orchestrator | TASK [include_role : manila] 
*************************************************** 2026-03-23 00:54:13.943943 | orchestrator | Monday 23 March 2026 00:51:04 +0000 (0:00:01.898) 0:02:42.687 ********** 2026-03-23 00:54:13.943948 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.943952 | orchestrator | 2026-03-23 00:54:13.943956 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-23 00:54:13.943960 | orchestrator | Monday 23 March 2026 00:51:05 +0000 (0:00:00.902) 0:02:43.590 ********** 2026-03-23 00:54:13.943965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-23 00:54:13.943974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-23 00:54:13.943979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.943986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.943991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.943995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-23 00:54:13.944020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944034 | orchestrator | 2026-03-23 00:54:13.944038 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-23 00:54:13.944042 | orchestrator | Monday 23 March 2026 00:51:09 +0000 (0:00:03.416) 0:02:47.006 ********** 2026-03-23 00:54:13.944049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-23 00:54:13.944056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944072 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.944077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-23 00:54:13.944082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944101 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.944107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}}}})  2026-03-23 00:54:13.944112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944124 | 
orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.944128 | orchestrator |
2026-03-23 00:54:13.944132 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-23 00:54:13.944135 | orchestrator | Monday 23 March 2026 00:51:09 +0000 (0:00:00.587) 0:02:47.594 **********
2026-03-23 00:54:13.944139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-23 00:54:13.944143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-23 00:54:13.944147 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.944151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-23 00:54:13.944159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-23 00:54:13.944163 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.944167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-23 00:54:13.944171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-23 00:54:13.944176 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.944180 | orchestrator |
2026-03-23 00:54:13.944184 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-03-23 00:54:13.944188 | orchestrator | Monday 23 March 2026 00:51:10 +0000 (0:00:00.913) 0:02:48.507 **********
2026-03-23 00:54:13.944192 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:54:13.944195 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:54:13.944199 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:54:13.944203 | orchestrator |
2026-03-23 00:54:13.944206 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-03-23 00:54:13.944210 | orchestrator | Monday 23 March 2026 00:51:12 +0000 (0:00:01.327) 0:02:49.835 **********
2026-03-23 00:54:13.944214 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:54:13.944218 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:54:13.944221 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:54:13.944225 | orchestrator |
2026-03-23 00:54:13.944229 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-03-23 00:54:13.944233 | orchestrator | Monday 23 March 2026 00:51:13 +0000 (0:00:01.945) 0:02:51.781 **********
2026-03-23 00:54:13.944236 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:54:13.944240 | orchestrator |
2026-03-23 00:54:13.944244 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-03-23 00:54:13.944248 | orchestrator | Monday 23 March 2026 00:51:15 +0000 (0:00:01.297) 0:02:53.078 **********
2026-03-23 00:54:13.944251 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-23 00:54:13.944255 | orchestrator |
2026-03-23 00:54:13.944297 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-03-23 00:54:13.944303 | orchestrator | Monday 23 March 2026 00:51:18 +0000 (0:00:03.278) 0:02:56.357
2026-03-23 00:54:13.944309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:54:13.944323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-23 00:54:13.944330 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.944340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:54:13.944347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-23 00:54:13.944354 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.944430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:54:13.944443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-23 00:54:13.944447 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.944451 | orchestrator | 2026-03-23 00:54:13.944455 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-23 00:54:13.944458 | orchestrator | Monday 23 March 2026 00:51:20 +0000 (0:00:02.258) 0:02:58.615 ********** 
2026-03-23 00:54:13.944463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:54:13.944469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-23 00:54:13.944473 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.944484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:54:13.944488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-23 00:54:13.944492 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.944496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:54:13.944505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-23 00:54:13.944510 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.944513 | orchestrator | 2026-03-23 00:54:13.944517 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-23 00:54:13.944521 | orchestrator | Monday 23 March 2026 00:51:22 +0000 (0:00:02.106) 0:03:00.722 ********** 2026-03-23 
00:54:13.944527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-23 00:54:13.944532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-23 00:54:13.944536 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.944539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-23 00:54:13.944546 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-23 00:54:13.944550 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.944554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-23 00:54:13.944560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-23 00:54:13.944564 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.944568 
| orchestrator |
2026-03-23 00:54:13.944572 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-03-23 00:54:13.944575 | orchestrator | Monday 23 March 2026 00:51:25 +0000 (0:00:02.076) 0:03:02.798 **********
2026-03-23 00:54:13.944579 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:54:13.944583 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:54:13.944587 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:54:13.944590 | orchestrator |
2026-03-23 00:54:13.944594 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-03-23 00:54:13.944598 | orchestrator | Monday 23 March 2026 00:51:26 +0000 (0:00:01.927) 0:03:04.726 **********
2026-03-23 00:54:13.944602 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.944608 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.944612 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.944615 | orchestrator |
2026-03-23 00:54:13.944619 | orchestrator | TASK [include_role : masakari] *************************************************
2026-03-23 00:54:13.944623 | orchestrator | Monday 23 March 2026 00:51:28 +0000 (0:00:01.578) 0:03:06.304 **********
2026-03-23 00:54:13.944627 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.944631 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.944634 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.944638 | orchestrator |
2026-03-23 00:54:13.944642 | orchestrator | TASK [include_role : memcached] ************************************************
2026-03-23 00:54:13.944645 | orchestrator | Monday 23 March 2026 00:51:28 +0000 (0:00:00.256) 0:03:06.561 **********
2026-03-23 00:54:13.944649 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:54:13.944653 | orchestrator |
2026-03-23 00:54:13.944657 | orchestrator | TASK [haproxy-config : Copying over
memcached haproxy config] ****************** 2026-03-23 00:54:13.944661 | orchestrator | Monday 23 March 2026 00:51:29 +0000 (0:00:01.124) 0:03:07.685 ********** 2026-03-23 00:54:13.944665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-23 00:54:13.944673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-23 00:54:13.944677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-23 00:54:13.944681 | orchestrator | 2026-03-23 00:54:13.944685 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-23 00:54:13.944688 | orchestrator | Monday 23 March 2026 00:51:31 +0000 (0:00:01.469) 0:03:09.155 ********** 2026-03-23 00:54:13.944695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-23 00:54:13.944699 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.944705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-23 00:54:13.944712 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.944716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-23 00:54:13.944720 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.944724 | orchestrator | 2026-03-23 00:54:13.944728 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-23 00:54:13.944731 | orchestrator | Monday 23 March 2026 00:51:31 +0000 (0:00:00.301) 0:03:09.457 ********** 2026-03-23 00:54:13.944735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}})
2026-03-23 00:54:13.944739 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.944743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-23 00:54:13.944747 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.944751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-23 00:54:13.944755 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.944759 | orchestrator |
2026-03-23 00:54:13.944762 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-03-23 00:54:13.944766 | orchestrator | Monday 23 March 2026 00:51:32 +0000 (0:00:00.730) 0:03:10.188 **********
2026-03-23 00:54:13.944770 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.944774 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.944777 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.944781 | orchestrator |
2026-03-23 00:54:13.944785 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-03-23 00:54:13.944789 | orchestrator | Monday 23 March 2026 00:51:32 +0000 (0:00:00.349) 0:03:10.537 **********
2026-03-23 00:54:13.944792 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:54:13.944796 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:54:13.944800 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:54:13.944803 | orchestrator |
2026-03-23 00:54:13.944807 | orchestrator | TASK [include_role : mistral]
************************************************** 2026-03-23 00:54:13.944811 | orchestrator | Monday 23 March 2026 00:51:33 +0000 (0:00:01.076) 0:03:11.614 ********** 2026-03-23 00:54:13.944815 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.944819 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.944822 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.944826 | orchestrator | 2026-03-23 00:54:13.944830 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-23 00:54:13.944835 | orchestrator | Monday 23 March 2026 00:51:34 +0000 (0:00:00.303) 0:03:11.917 ********** 2026-03-23 00:54:13.944839 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.944843 | orchestrator | 2026-03-23 00:54:13.944846 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-23 00:54:13.944853 | orchestrator | Monday 23 March 2026 00:51:35 +0000 (0:00:01.403) 0:03:13.321 ********** 2026-03-23 00:54:13.944859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-03-23 00:54:13.944864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 00:54:13.944872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-23 00:54:13.944896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.944918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-23 00:54:13.944923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.944927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 00:54:13.944941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.944949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.944953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.944961 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-23 00:54:13.944969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.944988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-03-23 00:54:13.944992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.944999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-23 00:54:13.945013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.945017 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.945067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2026-03-23 00:54:13.945098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.945121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945126 | orchestrator | 2026-03-23 00:54:13.945130 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-23 00:54:13.945135 | orchestrator | Monday 23 March 2026 00:51:39 +0000 (0:00:04.272) 0:03:17.593 ********** 2026-03-23 00:54:13.945141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 00:54:13.945146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-23 00:54:13.945171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 00:54:13.945192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-23 00:54:13.945357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.945377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945393 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.945399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.945436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 00:54:13.945443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945447 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.945451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-23 00:54:13.945472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-23 00:54:13.945516 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-23 00:54:13.945526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-23 00:54:13.945530 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.945534 | orchestrator | 2026-03-23 00:54:13.945537 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-23 00:54:13.945541 | orchestrator | Monday 23 March 2026 00:51:41 +0000 (0:00:02.146) 0:03:19.740 ********** 2026-03-23 00:54:13.945545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-23 00:54:13.945552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-23 00:54:13.945556 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.945559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-23 00:54:13.945563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-23 00:54:13.945570 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.945574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2026-03-23 00:54:13.945578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-23 00:54:13.945582 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.945585 | orchestrator | 2026-03-23 00:54:13.945589 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-23 00:54:13.945593 | orchestrator | Monday 23 March 2026 00:51:43 +0000 (0:00:01.441) 0:03:21.181 ********** 2026-03-23 00:54:13.945596 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.945600 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.945604 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.945608 | orchestrator | 2026-03-23 00:54:13.945611 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-23 00:54:13.945615 | orchestrator | Monday 23 March 2026 00:51:44 +0000 (0:00:01.515) 0:03:22.697 ********** 2026-03-23 00:54:13.945619 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.945623 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.945626 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.945630 | orchestrator | 2026-03-23 00:54:13.945634 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-23 00:54:13.945637 | orchestrator | Monday 23 March 2026 00:51:47 +0000 (0:00:02.310) 0:03:25.007 ********** 2026-03-23 00:54:13.945641 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.945645 | orchestrator | 2026-03-23 00:54:13.945649 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-23 00:54:13.945652 | orchestrator | Monday 23 March 2026 00:51:48 +0000 (0:00:01.382) 0:03:26.390 
********** 2026-03-23 00:54:13.945656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.945662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.945671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.945676 | orchestrator | 2026-03-23 00:54:13.945679 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-23 00:54:13.945683 | orchestrator | Monday 23 March 2026 00:51:51 +0000 (0:00:03.121) 0:03:29.511 ********** 2026-03-23 00:54:13.945687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.945691 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.945695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.945699 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.945705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.945712 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.945716 | orchestrator | 2026-03-23 00:54:13.945720 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-23 00:54:13.945724 | orchestrator | Monday 23 March 2026 00:51:52 +0000 (0:00:00.496) 0:03:30.008 ********** 2026-03-23 00:54:13.945742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-23 00:54:13.945748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-23 00:54:13.945753 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.945757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-23 00:54:13.945761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-23 00:54:13.945765 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.945768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-23 00:54:13.945772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-23 00:54:13.945776 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.945780 | orchestrator | 2026-03-23 00:54:13.945783 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-23 00:54:13.945787 | orchestrator | Monday 23 March 2026 00:51:53 +0000 (0:00:01.284) 0:03:31.293 ********** 2026-03-23 00:54:13.945791 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.945795 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.945798 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.945802 | orchestrator | 2026-03-23 00:54:13.945806 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-23 00:54:13.945810 | orchestrator | Monday 23 March 2026 00:51:54 +0000 (0:00:01.147) 0:03:32.440 ********** 2026-03-23 00:54:13.945813 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.945817 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.945821 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.945825 | orchestrator | 2026-03-23 00:54:13.945828 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-23 00:54:13.945832 | orchestrator | Monday 23 March 2026 00:51:56 +0000 (0:00:01.858) 0:03:34.299 ********** 2026-03-23 00:54:13.945836 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.945840 | orchestrator | 2026-03-23 00:54:13.945844 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-23 00:54:13.945847 | orchestrator | Monday 23 March 2026 00:51:57 +0000 (0:00:01.251) 0:03:35.551 ********** 2026-03-23 00:54:13.945852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.945864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.945875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.945900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945908 | orchestrator | 2026-03-23 00:54:13.945912 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-23 00:54:13.945916 | orchestrator | Monday 23 March 2026 00:52:01 +0000 (0:00:03.724) 0:03:39.275 ********** 2026-03-23 00:54:13.945920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.945928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945938 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.945945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.945949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945960 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.945964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.945973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.945984 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.945988 | orchestrator | 2026-03-23 00:54:13.945992 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-23 00:54:13.945997 | orchestrator | Monday 23 March 2026 00:52:02 +0000 (0:00:00.592) 0:03:39.868 ********** 2026-03-23 00:54:13.946001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946336 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.946344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946401 | orchestrator | skipping: 
[testbed-node-1] 2026-03-23 00:54:13.946408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-23 00:54:13.946436 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.946443 | orchestrator | 2026-03-23 00:54:13.946451 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-23 00:54:13.946488 | orchestrator | Monday 23 March 2026 00:52:02 +0000 (0:00:00.773) 0:03:40.642 ********** 2026-03-23 00:54:13.946497 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.946504 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.946511 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.946518 | orchestrator | 2026-03-23 00:54:13.946526 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-23 00:54:13.946533 | orchestrator | Monday 23 March 2026 00:52:04 +0000 (0:00:01.571) 0:03:42.213 ********** 2026-03-23 00:54:13.946541 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.946548 | orchestrator | 
changed: [testbed-node-1] 2026-03-23 00:54:13.946555 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.946562 | orchestrator | 2026-03-23 00:54:13.946569 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-23 00:54:13.946586 | orchestrator | Monday 23 March 2026 00:52:06 +0000 (0:00:01.844) 0:03:44.057 ********** 2026-03-23 00:54:13.946595 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.946602 | orchestrator | 2026-03-23 00:54:13.946609 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-23 00:54:13.946616 | orchestrator | Monday 23 March 2026 00:52:07 +0000 (0:00:01.158) 0:03:45.216 ********** 2026-03-23 00:54:13.946625 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-23 00:54:13.946633 | orchestrator | 2026-03-23 00:54:13.946641 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-23 00:54:13.946648 | orchestrator | Monday 23 March 2026 00:52:08 +0000 (0:00:01.117) 0:03:46.334 ********** 2026-03-23 00:54:13.946658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-23 00:54:13.946674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-23 00:54:13.946681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-23 00:54:13.946689 | orchestrator | 2026-03-23 00:54:13.946696 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-23 00:54:13.946705 | orchestrator | Monday 23 March 2026 00:52:12 +0000 (0:00:03.574) 0:03:49.908 ********** 2026-03-23 00:54:13.946713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-23 00:54:13.946721 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.946728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-23 00:54:13.946736 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.946757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-23 00:54:13.946765 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.946772 | orchestrator | 2026-03-23 00:54:13.946780 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-23 00:54:13.946791 | orchestrator | Monday 23 March 2026 00:52:13 +0000 (0:00:01.218) 0:03:51.127 ********** 2026-03-23 00:54:13.946799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-23 00:54:13.946813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-23 00:54:13.946822 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.946829 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-23 00:54:13.946836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-23 00:54:13.946844 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.946851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-23 00:54:13.946858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-23 00:54:13.946865 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.946873 | orchestrator | 2026-03-23 00:54:13.946880 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-23 00:54:13.946887 | orchestrator | Monday 23 March 2026 00:52:15 +0000 (0:00:01.822) 0:03:52.949 ********** 2026-03-23 00:54:13.946894 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.946902 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.946909 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.946917 | orchestrator | 2026-03-23 00:54:13.946925 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-23 00:54:13.946932 | orchestrator | Monday 23 March 2026 00:52:17 +0000 (0:00:02.473) 
0:03:55.423 ********** 2026-03-23 00:54:13.946939 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.946946 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.946954 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.946962 | orchestrator | 2026-03-23 00:54:13.946969 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-23 00:54:13.946976 | orchestrator | Monday 23 March 2026 00:52:20 +0000 (0:00:02.755) 0:03:58.179 ********** 2026-03-23 00:54:13.946984 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-23 00:54:13.946993 | orchestrator | 2026-03-23 00:54:13.947001 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-23 00:54:13.947007 | orchestrator | Monday 23 March 2026 00:52:21 +0000 (0:00:00.725) 0:03:58.904 ********** 2026-03-23 00:54:13.947016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-23 00:54:13.947024 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.947044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-23 00:54:13.947057 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.947069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-23 00:54:13.947077 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.947084 | orchestrator | 2026-03-23 00:54:13.947092 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-23 00:54:13.947100 | orchestrator | Monday 23 March 2026 00:52:22 +0000 (0:00:01.087) 0:03:59.992 ********** 2026-03-23 00:54:13.947108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-23 00:54:13.947116 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.947123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-23 00:54:13.947131 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.947138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-23 00:54:13.947146 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.947154 | orchestrator | 2026-03-23 00:54:13.947161 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-23 00:54:13.947169 | orchestrator | Monday 23 March 2026 00:52:23 +0000 (0:00:01.287) 0:04:01.280 ********** 2026-03-23 00:54:13.947176 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.947184 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.947190 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.947198 | orchestrator | 2026-03-23 00:54:13.947205 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-23 00:54:13.947212 | orchestrator | Monday 23 March 2026 00:52:24 +0000 (0:00:01.047) 0:04:02.327 ********** 2026-03-23 00:54:13.947218 | orchestrator | ok: 
[testbed-node-0] 2026-03-23 00:54:13.947225 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.947231 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.947238 | orchestrator | 2026-03-23 00:54:13.947245 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-23 00:54:13.947285 | orchestrator | Monday 23 March 2026 00:52:26 +0000 (0:00:02.192) 0:04:04.520 ********** 2026-03-23 00:54:13.947304 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.947320 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.947327 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.947335 | orchestrator | 2026-03-23 00:54:13.947342 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-23 00:54:13.947349 | orchestrator | Monday 23 March 2026 00:52:29 +0000 (0:00:02.555) 0:04:07.075 ********** 2026-03-23 00:54:13.947357 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-23 00:54:13.947364 | orchestrator | 2026-03-23 00:54:13.947371 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-23 00:54:13.947378 | orchestrator | Monday 23 March 2026 00:52:30 +0000 (0:00:00.779) 0:04:07.855 ********** 2026-03-23 00:54:13.947400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-23 00:54:13.947409 | 
orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.947422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-23 00:54:13.947430 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.947437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-23 00:54:13.947445 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.947452 | orchestrator | 2026-03-23 00:54:13.947459 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-23 00:54:13.947467 | orchestrator | Monday 23 March 2026 00:52:31 +0000 (0:00:01.383) 0:04:09.239 ********** 2026-03-23 00:54:13.947474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-23 00:54:13.947481 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.947488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-23 00:54:13.947503 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.947510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-23 00:54:13.947518 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.947524 | orchestrator | 2026-03-23 00:54:13.947531 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-23 00:54:13.947538 | orchestrator | Monday 23 March 2026 00:52:32 +0000 (0:00:01.246) 0:04:10.485 ********** 2026-03-23 00:54:13.947545 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.947552 | orchestrator | skipping: [testbed-node-1] 
2026-03-23 00:54:13.947559 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.947566 | orchestrator | 2026-03-23 00:54:13.947574 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-23 00:54:13.947581 | orchestrator | Monday 23 March 2026 00:52:34 +0000 (0:00:01.421) 0:04:11.906 ********** 2026-03-23 00:54:13.947589 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.947609 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.947616 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.947623 | orchestrator | 2026-03-23 00:54:13.947630 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-23 00:54:13.947637 | orchestrator | Monday 23 March 2026 00:52:36 +0000 (0:00:02.685) 0:04:14.592 ********** 2026-03-23 00:54:13.947644 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.947652 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.947659 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.947666 | orchestrator | 2026-03-23 00:54:13.947674 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-23 00:54:13.947681 | orchestrator | Monday 23 March 2026 00:52:39 +0000 (0:00:03.067) 0:04:17.659 ********** 2026-03-23 00:54:13.947688 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.947695 | orchestrator | 2026-03-23 00:54:13.947707 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-23 00:54:13.947714 | orchestrator | Monday 23 March 2026 00:52:41 +0000 (0:00:01.231) 0:04:18.890 ********** 2026-03-23 00:54:13.947722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.947730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 00:54:13.947744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.947752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.947760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.947786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.947795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 00:54:13.947803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.947816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 
00:54:13.947824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.947833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.947847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 00:54:13.947862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.947870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.947885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 
5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.947892 | orchestrator | 2026-03-23 00:54:13.947901 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-23 00:54:13.947909 | orchestrator | Monday 23 March 2026 00:52:44 +0000 (0:00:03.535) 0:04:22.426 ********** 2026-03-23 00:54:13.947917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.947924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 00:54:13.947970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.947982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.947990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.948003 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.948012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.948019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 00:54:13.948027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  
2026-03-23 00:54:13.948045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.948056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.948064 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.948072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.948084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 00:54:13.948092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.948099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 00:54:13.948107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 00:54:13.948126 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.948133 | orchestrator | 2026-03-23 00:54:13.948141 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-23 00:54:13.948148 | orchestrator | Monday 23 March 2026 00:52:45 +0000 (0:00:01.035) 0:04:23.462 ********** 2026-03-23 00:54:13.948156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-23 00:54:13.948168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-23 00:54:13.948182 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.948190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-23 00:54:13.948197 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-23 00:54:13.948204 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.948210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-23 00:54:13.948216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-23 00:54:13.948223 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.948229 | orchestrator | 2026-03-23 00:54:13.948236 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-23 00:54:13.948242 | orchestrator | Monday 23 March 2026 00:52:46 +0000 (0:00:00.878) 0:04:24.340 ********** 2026-03-23 00:54:13.948250 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.948258 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.948281 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.948287 | orchestrator | 2026-03-23 00:54:13.948294 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-23 00:54:13.948302 | orchestrator | Monday 23 March 2026 00:52:47 +0000 (0:00:01.361) 0:04:25.702 ********** 2026-03-23 00:54:13.948308 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.948316 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.948322 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.948329 | orchestrator | 2026-03-23 00:54:13.948337 | orchestrator | TASK [include_role : opensearch] 
*********************************************** 2026-03-23 00:54:13.948344 | orchestrator | Monday 23 March 2026 00:52:50 +0000 (0:00:02.205) 0:04:27.907 ********** 2026-03-23 00:54:13.948351 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.948357 | orchestrator | 2026-03-23 00:54:13.948365 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-23 00:54:13.948372 | orchestrator | Monday 23 March 2026 00:52:51 +0000 (0:00:01.564) 0:04:29.472 ********** 2026-03-23 00:54:13.948380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:54:13.948401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:54:13.948416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:54:13.948425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:54:13.948434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:54:13.948519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:54:13.948541 | orchestrator | 2026-03-23 00:54:13.948549 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-23 00:54:13.948556 | orchestrator | Monday 23 March 2026 00:52:56 +0000 (0:00:04.954) 0:04:34.427 ********** 2026-03-23 00:54:13.948567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-23 00:54:13.948575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-23 00:54:13.948583 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.948591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-23 00:54:13.948598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-23 00:54:13.948611 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.948637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-23 00:54:13.948647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-23 00:54:13.948655 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.948664 | orchestrator | 2026-03-23 00:54:13.948671 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-23 00:54:13.948679 | orchestrator | Monday 23 March 2026 00:52:57 +0000 (0:00:00.765) 0:04:35.192 ********** 2026-03-23 00:54:13.948687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-23 00:54:13.948695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-23 00:54:13.948704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}})  2026-03-23 00:54:13.948711 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.948719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-23 00:54:13.948726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-23 00:54:13.948740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-23 00:54:13.948748 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.948755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-23 00:54:13.948762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-23 00:54:13.948781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-23 00:54:13.948789 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.948797 | orchestrator | 2026-03-23 00:54:13.948805 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 
2026-03-23 00:54:13.948812 | orchestrator | Monday 23 March 2026 00:52:58 +0000 (0:00:01.059) 0:04:36.252 ********** 2026-03-23 00:54:13.948819 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.948826 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.948833 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.948841 | orchestrator | 2026-03-23 00:54:13.948848 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-23 00:54:13.948859 | orchestrator | Monday 23 March 2026 00:52:58 +0000 (0:00:00.365) 0:04:36.617 ********** 2026-03-23 00:54:13.948867 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.948876 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.948883 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.948890 | orchestrator | 2026-03-23 00:54:13.948897 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-23 00:54:13.948905 | orchestrator | Monday 23 March 2026 00:52:59 +0000 (0:00:01.110) 0:04:37.727 ********** 2026-03-23 00:54:13.948911 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.948919 | orchestrator | 2026-03-23 00:54:13.948926 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-23 00:54:13.948933 | orchestrator | Monday 23 March 2026 00:53:01 +0000 (0:00:01.583) 0:04:39.311 ********** 2026-03-23 00:54:13.948940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-23 00:54:13.948948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 00:54:13.948964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.948972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.948981 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-23 00:54:13.949012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 00:54:13.949020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-23 00:54:13.949054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 00:54:13.949072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949091 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-23 00:54:13.949112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-23 00:54:13.949119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-23 00:54:13.949158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2026-03-23 00:54:13.949170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-23 00:54:13.949212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-23 00:54:13.949223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949245 | orchestrator | 2026-03-23 00:54:13.949253 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-23 00:54:13.949280 | orchestrator | Monday 23 March 2026 00:53:05 +0000 (0:00:04.298) 0:04:43.610 ********** 2026-03-23 00:54:13.949293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-23 00:54:13.949304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 00:54:13.949312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949333 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-23 00:54:13.949348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-23 00:54:13.949364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-23 00:54:13.949373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 00:54:13.949393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-03-23 00:54:13.949415 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.949423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-23 00:54:13.949459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-23 00:54:13.949467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949489 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.949500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-23 00:54:13.949515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 00:54:13.949523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-23 00:54:13.949563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-23 00:54:13.949573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 00:54:13.949594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 00:54:13.949602 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.949610 | orchestrator | 2026-03-23 00:54:13.949617 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-23 00:54:13.949624 | orchestrator | Monday 23 March 2026 00:53:06 +0000 (0:00:00.863) 0:04:44.473 ********** 2026-03-23 00:54:13.949631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-23 00:54:13.949638 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-23 00:54:13.949646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-23 00:54:13.949654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-23 00:54:13.949661 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.949668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-23 00:54:13.949676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-23 00:54:13.949683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-23 00:54:13.949690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-03-23 00:54:13.949702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-23 00:54:13.949709 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.949717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-23 00:54:13.949730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-23 00:54:13.949741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-23 00:54:13.949748 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.949755 | orchestrator | 2026-03-23 00:54:13.949763 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-23 00:54:13.949770 | orchestrator | Monday 23 March 2026 00:53:07 +0000 (0:00:01.123) 0:04:45.596 ********** 2026-03-23 00:54:13.949777 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.949784 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.949791 | orchestrator | skipping: [testbed-node-2] 
2026-03-23 00:54:13.949798 | orchestrator | 2026-03-23 00:54:13.949805 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-23 00:54:13.949812 | orchestrator | Monday 23 March 2026 00:53:08 +0000 (0:00:00.402) 0:04:45.998 ********** 2026-03-23 00:54:13.949819 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.949826 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.949833 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.949840 | orchestrator | 2026-03-23 00:54:13.949846 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-23 00:54:13.949854 | orchestrator | Monday 23 March 2026 00:53:09 +0000 (0:00:01.121) 0:04:47.120 ********** 2026-03-23 00:54:13.949861 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.949867 | orchestrator | 2026-03-23 00:54:13.949874 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-23 00:54:13.949881 | orchestrator | Monday 23 March 2026 00:53:10 +0000 (0:00:01.282) 0:04:48.403 ********** 2026-03-23 00:54:13.949888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:54:13.949897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:54:13.949919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-23 00:54:13.949927 | orchestrator | 2026-03-23 00:54:13.949934 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-23 00:54:13.949942 | orchestrator | Monday 23 March 2026 00:53:13 +0000 (0:00:02.401) 0:04:50.804 ********** 2026-03-23 00:54:13.949949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-23 00:54:13.949956 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.949963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-23 00:54:13.949970 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.949978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-23 00:54:13.949992 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.949999 | orchestrator | 2026-03-23 00:54:13.950006 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-23 00:54:13.950013 | orchestrator | Monday 23 March 2026 
00:53:13 +0000 (0:00:00.419) 0:04:51.223 ********** 2026-03-23 00:54:13.950054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-23 00:54:13.950062 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-23 00:54:13.950076 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-23 00:54:13.950094 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950101 | orchestrator | 2026-03-23 00:54:13.950108 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-23 00:54:13.950115 | orchestrator | Monday 23 March 2026 00:53:14 +0000 (0:00:00.612) 0:04:51.836 ********** 2026-03-23 00:54:13.950122 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950128 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950135 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950142 | orchestrator | 2026-03-23 00:54:13.950149 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-23 00:54:13.950157 | orchestrator | Monday 23 March 2026 00:53:14 +0000 (0:00:00.790) 0:04:52.626 ********** 2026-03-23 00:54:13.950164 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950171 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950178 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950185 | orchestrator | 2026-03-23 00:54:13.950192 | orchestrator | TASK [include_role : skyline] 
************************************************** 2026-03-23 00:54:13.950199 | orchestrator | Monday 23 March 2026 00:53:16 +0000 (0:00:01.343) 0:04:53.970 ********** 2026-03-23 00:54:13.950206 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:54:13.950212 | orchestrator | 2026-03-23 00:54:13.950218 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-23 00:54:13.950224 | orchestrator | Monday 23 March 2026 00:53:17 +0000 (0:00:01.420) 0:04:55.391 ********** 2026-03-23 00:54:13.950231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.950246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.950258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.950287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.950294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.950302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-23 00:54:13.950314 | orchestrator | 2026-03-23 00:54:13.950321 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-23 00:54:13.950328 | orchestrator | Monday 23 March 2026 00:53:24 +0000 (0:00:06.460) 0:05:01.851 ********** 2026-03-23 00:54:13.950339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.950350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.950357 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.950372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.950385 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.950407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-23 00:54:13.950415 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950422 | orchestrator | 2026-03-23 00:54:13.950429 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-23 00:54:13.950436 | orchestrator | Monday 23 March 2026 00:53:25 +0000 (0:00:01.012) 0:05:02.863 ********** 2026-03-23 00:54:13.950443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950477 | orchestrator | skipping: 
[testbed-node-0] 2026-03-23 00:54:13.950484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950512 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-23 00:54:13.950547 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950553 | orchestrator | 2026-03-23 00:54:13.950560 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-23 00:54:13.950567 | orchestrator | Monday 23 March 2026 00:53:26 +0000 (0:00:01.049) 0:05:03.913 ********** 2026-03-23 00:54:13.950574 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.950581 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.950588 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.950595 | orchestrator | 2026-03-23 00:54:13.950601 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-23 00:54:13.950608 | orchestrator | Monday 23 March 2026 00:53:27 +0000 (0:00:01.290) 0:05:05.204 ********** 2026-03-23 00:54:13.950618 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.950625 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.950632 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.950639 | orchestrator | 2026-03-23 00:54:13.950645 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-23 00:54:13.950652 | orchestrator | Monday 23 March 2026 00:53:29 +0000 (0:00:02.277) 0:05:07.481 ********** 2026-03-23 00:54:13.950660 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950666 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950673 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950680 | orchestrator | 2026-03-23 00:54:13.950687 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-23 00:54:13.950693 | orchestrator | Monday 23 March 2026 00:53:30 +0000 (0:00:00.614) 
0:05:08.096 ********** 2026-03-23 00:54:13.950700 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950711 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950717 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950724 | orchestrator | 2026-03-23 00:54:13.950736 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-23 00:54:13.950743 | orchestrator | Monday 23 March 2026 00:53:30 +0000 (0:00:00.330) 0:05:08.426 ********** 2026-03-23 00:54:13.950749 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950756 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950763 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950770 | orchestrator | 2026-03-23 00:54:13.950776 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-23 00:54:13.950783 | orchestrator | Monday 23 March 2026 00:53:30 +0000 (0:00:00.314) 0:05:08.741 ********** 2026-03-23 00:54:13.950790 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950797 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950804 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950811 | orchestrator | 2026-03-23 00:54:13.950818 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-23 00:54:13.950825 | orchestrator | Monday 23 March 2026 00:53:31 +0000 (0:00:00.306) 0:05:09.048 ********** 2026-03-23 00:54:13.950832 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950840 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950846 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950853 | orchestrator | 2026-03-23 00:54:13.950860 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-23 00:54:13.950867 | orchestrator | Monday 23 March 2026 00:53:31 +0000 (0:00:00.602) 
0:05:09.651 ********** 2026-03-23 00:54:13.950874 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.950881 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.950888 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.950895 | orchestrator | 2026-03-23 00:54:13.950901 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-23 00:54:13.950908 | orchestrator | Monday 23 March 2026 00:53:32 +0000 (0:00:00.529) 0:05:10.180 ********** 2026-03-23 00:54:13.950915 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.950923 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.950930 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.950937 | orchestrator | 2026-03-23 00:54:13.950944 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-23 00:54:13.950950 | orchestrator | Monday 23 March 2026 00:53:33 +0000 (0:00:00.680) 0:05:10.861 ********** 2026-03-23 00:54:13.950957 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.950964 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.950971 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.950977 | orchestrator | 2026-03-23 00:54:13.950984 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-23 00:54:13.950991 | orchestrator | Monday 23 March 2026 00:53:33 +0000 (0:00:00.641) 0:05:11.502 ********** 2026-03-23 00:54:13.950998 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.951005 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.951011 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.951018 | orchestrator | 2026-03-23 00:54:13.951025 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-23 00:54:13.951032 | orchestrator | Monday 23 March 2026 00:53:34 +0000 (0:00:00.946) 0:05:12.449 ********** 2026-03-23 00:54:13.951039 | 
orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.951046 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.951053 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.951060 | orchestrator | 2026-03-23 00:54:13.951067 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-23 00:54:13.951074 | orchestrator | Monday 23 March 2026 00:53:35 +0000 (0:00:01.034) 0:05:13.484 ********** 2026-03-23 00:54:13.951081 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.951087 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.951094 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.951101 | orchestrator | 2026-03-23 00:54:13.951108 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-23 00:54:13.951115 | orchestrator | Monday 23 March 2026 00:53:36 +0000 (0:00:00.966) 0:05:14.450 ********** 2026-03-23 00:54:13.951126 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.951133 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.951140 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.951146 | orchestrator | 2026-03-23 00:54:13.951153 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-23 00:54:13.951160 | orchestrator | Monday 23 March 2026 00:53:45 +0000 (0:00:08.536) 0:05:22.987 ********** 2026-03-23 00:54:13.951167 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.951174 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.951181 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.951187 | orchestrator | 2026-03-23 00:54:13.951195 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-23 00:54:13.951202 | orchestrator | Monday 23 March 2026 00:53:46 +0000 (0:00:01.091) 0:05:24.079 ********** 2026-03-23 00:54:13.951208 | orchestrator | changed: [testbed-node-0] 2026-03-23 
00:54:13.951214 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.951219 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.951226 | orchestrator | 2026-03-23 00:54:13.951231 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-23 00:54:13.951237 | orchestrator | Monday 23 March 2026 00:53:54 +0000 (0:00:08.366) 0:05:32.445 ********** 2026-03-23 00:54:13.951244 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.951255 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.951278 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.951284 | orchestrator | 2026-03-23 00:54:13.951290 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-23 00:54:13.951298 | orchestrator | Monday 23 March 2026 00:53:58 +0000 (0:00:03.703) 0:05:36.149 ********** 2026-03-23 00:54:13.951304 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:54:13.951311 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:54:13.951318 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:54:13.951324 | orchestrator | 2026-03-23 00:54:13.951331 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-23 00:54:13.951337 | orchestrator | Monday 23 March 2026 00:54:07 +0000 (0:00:09.024) 0:05:45.174 ********** 2026-03-23 00:54:13.951344 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.951351 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.951358 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.951365 | orchestrator | 2026-03-23 00:54:13.951376 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-23 00:54:13.951383 | orchestrator | Monday 23 March 2026 00:54:08 +0000 (0:00:00.677) 0:05:45.851 ********** 2026-03-23 00:54:13.951390 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.951397 | 
orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.951404 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.951411 | orchestrator | 2026-03-23 00:54:13.951417 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-23 00:54:13.951424 | orchestrator | Monday 23 March 2026 00:54:08 +0000 (0:00:00.336) 0:05:46.187 ********** 2026-03-23 00:54:13.951431 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.951438 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.951444 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.951451 | orchestrator | 2026-03-23 00:54:13.951458 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-23 00:54:13.951465 | orchestrator | Monday 23 March 2026 00:54:08 +0000 (0:00:00.340) 0:05:46.528 ********** 2026-03-23 00:54:13.951472 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.951478 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.951485 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.951492 | orchestrator | 2026-03-23 00:54:13.951499 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-23 00:54:13.951506 | orchestrator | Monday 23 March 2026 00:54:09 +0000 (0:00:00.344) 0:05:46.873 ********** 2026-03-23 00:54:13.951520 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.951526 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.951533 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.951540 | orchestrator | 2026-03-23 00:54:13.951547 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-23 00:54:13.951553 | orchestrator | Monday 23 March 2026 00:54:09 +0000 (0:00:00.704) 0:05:47.577 ********** 2026-03-23 00:54:13.951560 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:54:13.951567 | 
orchestrator | skipping: [testbed-node-1] 2026-03-23 00:54:13.951574 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:54:13.951580 | orchestrator | 2026-03-23 00:54:13.951587 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-23 00:54:13.951594 | orchestrator | Monday 23 March 2026 00:54:10 +0000 (0:00:00.350) 0:05:47.928 ********** 2026-03-23 00:54:13.951601 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.951608 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.951615 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.951622 | orchestrator | 2026-03-23 00:54:13.951628 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-23 00:54:13.951635 | orchestrator | Monday 23 March 2026 00:54:11 +0000 (0:00:00.965) 0:05:48.893 ********** 2026-03-23 00:54:13.951642 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:54:13.951648 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:54:13.951655 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:54:13.951662 | orchestrator | 2026-03-23 00:54:13.951669 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:54:13.951676 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-23 00:54:13.951684 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-23 00:54:13.951691 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-23 00:54:13.951697 | orchestrator | 2026-03-23 00:54:13.951704 | orchestrator | 2026-03-23 00:54:13.951711 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:54:13.951718 | orchestrator | Monday 23 March 2026 00:54:11 +0000 (0:00:00.806) 0:05:49.700 ********** 2026-03-23 
00:54:13.951724 | orchestrator | =============================================================================== 2026-03-23 00:54:13.951731 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.02s 2026-03-23 00:54:13.951738 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.54s 2026-03-23 00:54:13.951744 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.37s 2026-03-23 00:54:13.951751 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.46s 2026-03-23 00:54:13.951758 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.18s 2026-03-23 00:54:13.951765 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.95s 2026-03-23 00:54:13.951771 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.85s 2026-03-23 00:54:13.951778 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.30s 2026-03-23 00:54:13.951784 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.30s 2026-03-23 00:54:13.951796 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.27s 2026-03-23 00:54:13.951803 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.74s 2026-03-23 00:54:13.951810 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.72s 2026-03-23 00:54:13.951816 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.70s 2026-03-23 00:54:13.951823 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.61s 2026-03-23 00:54:13.951839 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.57s 2026-03-23 
00:54:13.951847 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.56s 2026-03-23 00:54:13.951854 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.54s 2026-03-23 00:54:13.951865 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.45s 2026-03-23 00:54:13.951872 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.44s 2026-03-23 00:54:13.951879 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.42s 2026-03-23 00:54:13.951886 | orchestrator | 2026-03-23 00:54:13 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:54:13.951894 | orchestrator | 2026-03-23 00:54:13 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:54:13.951901 | orchestrator | 2026-03-23 00:54:13 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:54:16.998763 | orchestrator | 2026-03-23 00:54:16 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:54:16.999951 | orchestrator | 2026-03-23 00:54:16 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:54:17.002292 | orchestrator | 2026-03-23 00:54:17 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:54:17.005327 | orchestrator | 2026-03-23 00:54:17 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:54:20.057036 | orchestrator | 2026-03-23 00:54:20 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:54:20.059741 | orchestrator | 2026-03-23 00:54:20 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:54:20.061145 | orchestrator | 2026-03-23 00:54:20 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:54:20.061211 | orchestrator | 2026-03-23 00:54:20 | INFO  | Wait 
1 second(s) until the next check [... polling output trimmed: tasks e3c66c39-411e-425d-96fc-5ba778016c07, 63a482a7-5492-4304-94f2-6fad2464c98f, and 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca remained in state STARTED, re-checked every ~3 seconds from 00:54:23 through 00:56:03 ...] 2026-03-23 00:56:06.738106 | orchestrator | 2026-03-23 00:56:06 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:56:06.741441 | orchestrator | 2026-03-23 00:56:06 | INFO  | Task 
63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:56:06.743302 | orchestrator | 2026-03-23 00:56:06 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state STARTED 2026-03-23 00:56:06.743359 | orchestrator | 2026-03-23 00:56:06 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:56:09.797879 | orchestrator | 2026-03-23 00:56:09 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:56:09.801648 | orchestrator | 2026-03-23 00:56:09 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:56:09.803973 | orchestrator | 2026-03-23 00:56:09 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:56:09.811937 | orchestrator | 2026-03-23 00:56:09 | INFO  | Task 3bd2ccf8-c46a-49b9-82a8-72ab3ee926ca is in state SUCCESS 2026-03-23 00:56:09.813878 | orchestrator | 2026-03-23 00:56:09.813918 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-23 00:56:09.813924 | orchestrator | 2.16.14 2026-03-23 00:56:09.813930 | orchestrator | 2026-03-23 00:56:09.813936 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-23 00:56:09.813972 | orchestrator | 2026-03-23 00:56:09.813977 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-23 00:56:09.813981 | orchestrator | Monday 23 March 2026 00:45:45 +0000 (0:00:00.754) 0:00:00.754 ********** 2026-03-23 00:56:09.813996 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.814001 | orchestrator | 2026-03-23 00:56:09.814005 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-23 00:56:09.814008 | orchestrator | Monday 23 March 2026 00:45:47 +0000 (0:00:01.063) 0:00:01.817 ********** 
2026-03-23 00:56:09.814011 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.814054 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.814058 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.814062 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.814065 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.814068 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.814072 | orchestrator | 2026-03-23 00:56:09.814075 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-23 00:56:09.814078 | orchestrator | Monday 23 March 2026 00:45:49 +0000 (0:00:01.968) 0:00:03.785 ********** 2026-03-23 00:56:09.814105 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.814121 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.814124 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.814127 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.814130 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.814134 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.814137 | orchestrator | 2026-03-23 00:56:09.814140 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-23 00:56:09.814143 | orchestrator | Monday 23 March 2026 00:45:49 +0000 (0:00:00.644) 0:00:04.430 ********** 2026-03-23 00:56:09.814147 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.814150 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.814168 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.814173 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.814193 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.814199 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.814223 | orchestrator | 2026-03-23 00:56:09.814250 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-23 00:56:09.814257 | orchestrator | Monday 23 March 2026 00:45:50 +0000 
(0:00:00.907) 0:00:05.337 ********** 2026-03-23 00:56:09.814308 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.814313 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.814316 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.814319 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.814322 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.814325 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.814328 | orchestrator | 2026-03-23 00:56:09.814331 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-23 00:56:09.814335 | orchestrator | Monday 23 March 2026 00:45:51 +0000 (0:00:01.099) 0:00:06.437 ********** 2026-03-23 00:56:09.814338 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.814341 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.814344 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.814347 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.814350 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.814353 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.814356 | orchestrator | 2026-03-23 00:56:09.814359 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-23 00:56:09.814362 | orchestrator | Monday 23 March 2026 00:45:52 +0000 (0:00:00.935) 0:00:07.372 ********** 2026-03-23 00:56:09.814366 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.814369 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.814372 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.814378 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.814381 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.814385 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.814388 | orchestrator | 2026-03-23 00:56:09.814391 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-23 00:56:09.814394 | orchestrator | 
Monday 23 March 2026 00:45:54 +0000 (0:00:01.858) 0:00:09.230 ********** 2026-03-23 00:56:09.814397 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814401 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.814404 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.814407 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.814410 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.814413 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.814416 | orchestrator | 2026-03-23 00:56:09.814419 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-23 00:56:09.814422 | orchestrator | Monday 23 March 2026 00:45:55 +0000 (0:00:00.690) 0:00:09.921 ********** 2026-03-23 00:56:09.814425 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.814428 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.814431 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.814435 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.814438 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.814441 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.814444 | orchestrator | 2026-03-23 00:56:09.814451 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-23 00:56:09.814457 | orchestrator | Monday 23 March 2026 00:45:55 +0000 (0:00:00.848) 0:00:10.770 ********** 2026-03-23 00:56:09.814462 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-23 00:56:09.814467 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-23 00:56:09.814473 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-23 00:56:09.814476 | orchestrator | 2026-03-23 00:56:09.814479 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 
2026-03-23 00:56:09.814482 | orchestrator | Monday 23 March 2026 00:45:56 +0000 (0:00:00.597) 0:00:11.367 ********** 2026-03-23 00:56:09.814486 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.814489 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.814493 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.814504 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.814508 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.814512 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.814515 | orchestrator | 2026-03-23 00:56:09.814519 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-23 00:56:09.814523 | orchestrator | Monday 23 March 2026 00:45:57 +0000 (0:00:01.299) 0:00:12.666 ********** 2026-03-23 00:56:09.814527 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-23 00:56:09.814530 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-23 00:56:09.814547 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-23 00:56:09.814553 | orchestrator | 2026-03-23 00:56:09.814558 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-23 00:56:09.814563 | orchestrator | Monday 23 March 2026 00:46:00 +0000 (0:00:02.878) 0:00:15.545 ********** 2026-03-23 00:56:09.814568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-23 00:56:09.814573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-23 00:56:09.814579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-23 00:56:09.814583 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814586 | orchestrator | 2026-03-23 00:56:09.814590 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-23 
00:56:09.814593 | orchestrator | Monday 23 March 2026 00:46:01 +0000 (0:00:00.816) 0:00:16.361 ********** 2026-03-23 00:56:09.814598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.814605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.814611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.814616 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814621 | orchestrator | 2026-03-23 00:56:09.814627 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-23 00:56:09.814632 | orchestrator | Monday 23 March 2026 00:46:02 +0000 (0:00:01.109) 0:00:17.471 ********** 2026-03-23 00:56:09.814638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.814649 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.814654 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.814658 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814662 | orchestrator | 2026-03-23 00:56:09.814665 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-23 00:56:09.814669 | orchestrator | Monday 23 March 2026 00:46:03 +0000 (0:00:00.338) 0:00:17.809 ********** 2026-03-23 00:56:09.814679 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-23 00:45:58.629109', 'end': '2026-03-23 00:45:58.731134', 'delta': '0:00:00.102025', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.814684 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-03-23 00:45:59.622959', 'end': '2026-03-23 00:45:59.731096', 'delta': '0:00:00.108137', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.814688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-23 00:46:00.397087', 'end': '2026-03-23 00:46:00.494704', 'delta': '0:00:00.097617', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.814693 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814698 | orchestrator | 2026-03-23 00:56:09.814704 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-23 00:56:09.814709 | orchestrator | Monday 23 March 2026 00:46:03 +0000 (0:00:00.623) 0:00:18.432 ********** 2026-03-23 00:56:09.814714 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.814718 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.814724 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.814728 | orchestrator | ok: [testbed-node-0] 2026-03-23 
00:56:09.814733 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.814738 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.814746 | orchestrator | 2026-03-23 00:56:09.814752 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-23 00:56:09.814757 | orchestrator | Monday 23 March 2026 00:46:05 +0000 (0:00:02.203) 0:00:20.636 ********** 2026-03-23 00:56:09.814762 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-23 00:56:09.814768 | orchestrator | 2026-03-23 00:56:09.814772 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-23 00:56:09.814776 | orchestrator | Monday 23 March 2026 00:46:07 +0000 (0:00:01.898) 0:00:22.534 ********** 2026-03-23 00:56:09.814780 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814783 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.814787 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.814791 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.814799 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.814802 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.814844 | orchestrator | 2026-03-23 00:56:09.814848 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-23 00:56:09.814852 | orchestrator | Monday 23 March 2026 00:46:09 +0000 (0:00:01.359) 0:00:23.894 ********** 2026-03-23 00:56:09.814856 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814859 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.814863 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.814866 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.814870 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.814874 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.814877 | orchestrator | 2026-03-23 00:56:09.814881 | 
orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-23 00:56:09.814884 | orchestrator | Monday 23 March 2026 00:46:10 +0000 (0:00:01.038) 0:00:24.932 ********** 2026-03-23 00:56:09.814888 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814892 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.814896 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.814899 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.814903 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.814906 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.814910 | orchestrator | 2026-03-23 00:56:09.814913 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-23 00:56:09.814917 | orchestrator | Monday 23 March 2026 00:46:10 +0000 (0:00:00.830) 0:00:25.763 ********** 2026-03-23 00:56:09.814948 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814954 | orchestrator | 2026-03-23 00:56:09.814959 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-23 00:56:09.814964 | orchestrator | Monday 23 March 2026 00:46:11 +0000 (0:00:00.097) 0:00:25.860 ********** 2026-03-23 00:56:09.814970 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814975 | orchestrator | 2026-03-23 00:56:09.814980 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-23 00:56:09.814985 | orchestrator | Monday 23 March 2026 00:46:11 +0000 (0:00:00.291) 0:00:26.152 ********** 2026-03-23 00:56:09.814990 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.814996 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.815001 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.815011 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.815016 | orchestrator | skipping: [testbed-node-1] 
2026-03-23 00:56:09.815021 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.815025 | orchestrator | 2026-03-23 00:56:09.815028 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-23 00:56:09.815032 | orchestrator | Monday 23 March 2026 00:46:12 +0000 (0:00:00.683) 0:00:26.835 ********** 2026-03-23 00:56:09.815039 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.815043 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.815046 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.815062 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.815066 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.815069 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.815073 | orchestrator | 2026-03-23 00:56:09.815076 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-23 00:56:09.815090 | orchestrator | Monday 23 March 2026 00:46:12 +0000 (0:00:00.913) 0:00:27.749 ********** 2026-03-23 00:56:09.815094 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.815097 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.815101 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.815105 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.815108 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.815112 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.815115 | orchestrator | 2026-03-23 00:56:09.815119 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-23 00:56:09.815123 | orchestrator | Monday 23 March 2026 00:46:13 +0000 (0:00:00.692) 0:00:28.442 ********** 2026-03-23 00:56:09.815126 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.815130 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.815150 | orchestrator | skipping: [testbed-node-5] 
2026-03-23 00:56:09.815155 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.815158 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.815163 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.815168 | orchestrator | 2026-03-23 00:56:09.815174 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-23 00:56:09.815179 | orchestrator | Monday 23 March 2026 00:46:14 +0000 (0:00:00.951) 0:00:29.393 ********** 2026-03-23 00:56:09.815196 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.815199 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.815203 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.815206 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.815210 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.815213 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.815217 | orchestrator | 2026-03-23 00:56:09.815220 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-23 00:56:09.815224 | orchestrator | Monday 23 March 2026 00:46:15 +0000 (0:00:00.575) 0:00:29.968 ********** 2026-03-23 00:56:09.815227 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.815231 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.815235 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.815240 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.815245 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.815250 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.815255 | orchestrator | 2026-03-23 00:56:09.815260 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-23 00:56:09.815266 | orchestrator | Monday 23 March 2026 00:46:16 +0000 (0:00:00.825) 0:00:30.794 ********** 2026-03-23 00:56:09.815270 | orchestrator | skipping: [testbed-node-3] 
2026-03-23 00:56:09.815305 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.815309 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.815313 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.815316 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.815320 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.815324 | orchestrator | 2026-03-23 00:56:09.815329 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-23 00:56:09.815337 | orchestrator | Monday 23 March 2026 00:46:16 +0000 (0:00:00.753) 0:00:31.547 ********** 2026-03-23 00:56:09.815345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e8fe5fb--1ce5--58e9--8668--0121db885e3a-osd--block--4e8fe5fb--1ce5--58e9--8668--0121db885e3a', 'dm-uuid-LVM-lMkBvxv10W02N8c4sobLQ0h29HKaWnFCR7cPhV5ZeYAO6LBG1U6Q8KuacaSm9W1D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--64892dc7--40b9--50f4--a971--7ffdf1a56e40-osd--block--64892dc7--40b9--50f4--a971--7ffdf1a56e40', 'dm-uuid-LVM-kA6tF1EZr181HQ0V3skfDtYPJE1uMad9Sq3O4mjyCfcPDaJcjYKpbrcb0QmhBKlb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815368 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1bf36823--02d4--5086--a00f--5e3efdd328af-osd--block--1bf36823--02d4--5086--a00f--5e3efdd328af', 'dm-uuid-LVM-46WGyBqFiFffrkmN36ciuiQ5cckjL07GJJzRosi8GKlEOx76gBYFGnAqtBX1cxDm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--92a7bb1e--121d--56dc--8fa7--94c9c65422a6-osd--block--92a7bb1e--121d--56dc--8fa7--94c9c65422a6', 'dm-uuid-LVM-pjMACQ4vEJDQ2evYfnAhlh3dKWsldOpt336bhYbGPyPWqVJE2N5AWnWzKl6KddjT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 
00:56:09.815420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part1', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part14', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part15', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': 
'106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part16', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.815531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.815544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1bf36823--02d4--5086--a00f--5e3efdd328af-osd--block--1bf36823--02d4--5086--a00f--5e3efdd328af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kEE7Ck-8cxh-3YgF-isQE-C5eu-xXzy-3tTWUP', 'scsi-0QEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e', 'scsi-SQEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.815549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4e8fe5fb--1ce5--58e9--8668--0121db885e3a-osd--block--4e8fe5fb--1ce5--58e9--8668--0121db885e3a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PKFKTr-pcsF-qj8g-I47G-ODeh-oqUN-pjqrkV', 'scsi-0QEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f', 'scsi-SQEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.815553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7e7e409--387b--5e35--af60--96efea6ce8aa-osd--block--b7e7e409--387b--5e35--af60--96efea6ce8aa', 'dm-uuid-LVM-HrrdHKVvlffigjb21JUaHBk7nln1BlPkaHRqnZG62YT1PnrapsdzAe9Rck9gjuMK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6fa6fe99--be0d--55bf--a5b2--66c7db596be7-osd--block--6fa6fe99--be0d--55bf--a5b2--66c7db596be7', 'dm-uuid-LVM-1HDuY7LP7KT9iCr7bqCcrJ45J4jOmY5I09TE9ct2aroQWcsilZzsrpqQJmwrazJB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.815569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--92a7bb1e--121d--56dc--8fa7--94c9c65422a6-osd--block--92a7bb1e--121d--56dc--8fa7--94c9c65422a6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vK0CLZ-Zkn8-NYp8-uCt5-hT6I-IUy5-Sf42U6', 'scsi-0QEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6', 'scsi-SQEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--64892dc7--40b9--50f4--a971--7ffdf1a56e40-osd--block--64892dc7--40b9--50f4--a971--7ffdf1a56e40'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5etLp2-aUyt-7xxq-o9eL-0H8i-eimR-6PPrxd', 'scsi-0QEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0', 'scsi-SQEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4', 'scsi-SQEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5', 'scsi-SQEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-23 00:56:09.816216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816230 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part1', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part14', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part15', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part16', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b7e7e409--387b--5e35--af60--96efea6ce8aa-osd--block--b7e7e409--387b--5e35--af60--96efea6ce8aa'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jnm62G-v7Cy-4iJo-dTjS-LtgQ-XTBq-aem6vq', 'scsi-0QEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d', 'scsi-SQEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816288 | 
orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.816291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6fa6fe99--be0d--55bf--a5b2--66c7db596be7-osd--block--6fa6fe99--be0d--55bf--a5b2--66c7db596be7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oMXOqN-2up1-mMzD-oyo3-glzr-0BZQ-HD6hJ7', 'scsi-0QEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76', 'scsi-SQEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d', 'scsi-SQEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:56:09.816309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:56:09.816312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-03-23 00:56:09.816315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-23 00:56:09.816347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-23 00:56:09.816350 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.816353 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.816356 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.816359 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.816364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-23 00:56:09.816395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part1', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part14', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part15', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part16', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-23 00:56:09.816402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-23 00:56:09.816405 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.816408 | orchestrator |
2026-03-23 00:56:09.816411 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-23 00:56:09.816415 | orchestrator | Monday 23 March 2026 00:46:18 +0000 (0:00:01.848) 0:00:33.396 **********
2026-03-23 00:56:09.816420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e8fe5fb--1ce5--58e9--8668--0121db885e3a-osd--block--4e8fe5fb--1ce5--58e9--8668--0121db885e3a', 'dm-uuid-LVM-lMkBvxv10W02N8c4sobLQ0h29HKaWnFCR7cPhV5ZeYAO6LBG1U6Q8KuacaSm9W1D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--64892dc7--40b9--50f4--a971--7ffdf1a56e40-osd--block--64892dc7--40b9--50f4--a971--7ffdf1a56e40', 'dm-uuid-LVM-kA6tF1EZr181HQ0V3skfDtYPJE1uMad9Sq3O4mjyCfcPDaJcjYKpbrcb0QmhBKlb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816429 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816447 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816450 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816458 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1bf36823--02d4--5086--a00f--5e3efdd328af-osd--block--1bf36823--02d4--5086--a00f--5e3efdd328af', 'dm-uuid-LVM-46WGyBqFiFffrkmN36ciuiQ5cckjL07GJJzRosi8GKlEOx76gBYFGnAqtBX1cxDm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4e8fe5fb--1ce5--58e9--8668--0121db885e3a-osd--block--4e8fe5fb--1ce5--58e9--8668--0121db885e3a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PKFKTr-pcsF-qj8g-I47G-ODeh-oqUN-pjqrkV', 'scsi-0QEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f', 'scsi-SQEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--92a7bb1e--121d--56dc--8fa7--94c9c65422a6-osd--block--92a7bb1e--121d--56dc--8fa7--94c9c65422a6', 'dm-uuid-LVM-pjMACQ4vEJDQ2evYfnAhlh3dKWsldOpt336bhYbGPyPWqVJE2N5AWnWzKl6KddjT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--64892dc7--40b9--50f4--a971--7ffdf1a56e40-osd--block--64892dc7--40b9--50f4--a971--7ffdf1a56e40'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5etLp2-aUyt-7xxq-o9eL-0H8i-eimR-6PPrxd', 'scsi-0QEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0', 'scsi-SQEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816491 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4', 'scsi-SQEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816497 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816502 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7e7e409--387b--5e35--af60--96efea6ce8aa-osd--block--b7e7e409--387b--5e35--af60--96efea6ce8aa', 'dm-uuid-LVM-HrrdHKVvlffigjb21JUaHBk7nln1BlPkaHRqnZG62YT1PnrapsdzAe9Rck9gjuMK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816512 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.816517 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816521 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6fa6fe99--be0d--55bf--a5b2--66c7db596be7-osd--block--6fa6fe99--be0d--55bf--a5b2--66c7db596be7', 'dm-uuid-LVM-1HDuY7LP7KT9iCr7bqCcrJ45J4jOmY5I09TE9ct2aroQWcsilZzsrpqQJmwrazJB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816531 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816537 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816556 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816563 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816568 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816574 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816581 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816586 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816591 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.816602 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes',
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816607 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816613 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816618 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816629 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9686d40-ed2c-40e1-8ef5-b5d90039fa5e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 
512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816644 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816663 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part1', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part14', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part15', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part16', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-23 00:56:09.816672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816679 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1bf36823--02d4--5086--a00f--5e3efdd328af-osd--block--1bf36823--02d4--5086--a00f--5e3efdd328af'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kEE7Ck-8cxh-3YgF-isQE-C5eu-xXzy-3tTWUP', 'scsi-0QEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e', 'scsi-SQEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816685 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--92a7bb1e--121d--56dc--8fa7--94c9c65422a6-osd--block--92a7bb1e--121d--56dc--8fa7--94c9c65422a6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vK0CLZ-Zkn8-NYp8-uCt5-hT6I-IUy5-Sf42U6', 'scsi-0QEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6', 'scsi-SQEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816912 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5', 'scsi-SQEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816916 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816919 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816931 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part1', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part14', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part15', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part16', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-23 00:56:09.816939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b7e7e409--387b--5e35--af60--96efea6ce8aa-osd--block--b7e7e409--387b--5e35--af60--96efea6ce8aa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jnm62G-v7Cy-4iJo-dTjS-LtgQ-XTBq-aem6vq', 'scsi-0QEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d', 'scsi-SQEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6fa6fe99--be0d--55bf--a5b2--66c7db596be7-osd--block--6fa6fe99--be0d--55bf--a5b2--66c7db596be7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oMXOqN-2up1-mMzD-oyo3-glzr-0BZQ-HD6hJ7', 'scsi-0QEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76', 'scsi-SQEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d', 'scsi-SQEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816959 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816962 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816965 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816970 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816973 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816979 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816984 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816988 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:56:09.816993 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5e55cca-5656-41b4-9a27-a4492511de93-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-23 00:56:09.816999 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817002 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.817007 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817010 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817013 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.817017 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817020 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817023 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817026 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817033 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817037 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817042 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817045 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817050 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part1', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part14', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part15', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part16', 'scsi-SQEMU_QEMU_HARDDISK_c3578134-7537-4e48-a12d-a1d3ec7adf49-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817056 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-23 00:56:09.817059 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.817063 | orchestrator |
2026-03-23 00:56:09.817067 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-23 00:56:09.817071 | orchestrator | Monday 23 March 2026 00:46:20 +0000 (0:00:01.607) 0:00:35.004 **********
2026-03-23 00:56:09.817074 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.817077 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.817105 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.817111 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.817116 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.817121 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.817127 | orchestrator |
2026-03-23 00:56:09.817131 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-23 00:56:09.817137 | orchestrator | Monday 23 March 2026 00:46:21 +0000 (0:00:01.217) 0:00:36.221 **********
2026-03-23 00:56:09.817144 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.817151 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.817156 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.817161 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.817166 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.817170 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.817175 | orchestrator |
2026-03-23 00:56:09.817180 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-23 00:56:09.817184 | orchestrator | Monday 23 March 2026 00:46:22 +0000 (0:00:01.107) 0:00:37.329 **********
2026-03-23 00:56:09.817189 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817195 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817200 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817205 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.817210 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.817215 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.817220 | orchestrator |
2026-03-23 00:56:09.817225 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-23 00:56:09.817231 | orchestrator | Monday 23 March 2026 00:46:23 +0000 (0:00:00.858) 0:00:38.188 **********
2026-03-23 00:56:09.817235 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817241 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817245 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817250 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.817255 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.817260 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.817265 | orchestrator |
2026-03-23 00:56:09.817270 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-23 00:56:09.817276 | orchestrator | Monday 23 March 2026 00:46:24 +0000 (0:00:00.733) 0:00:38.921 **********
2026-03-23 00:56:09.817293 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817298 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817303 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817308 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.817313 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.817316 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.817319 | orchestrator |
2026-03-23 00:56:09.817323 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-23 00:56:09.817326 | orchestrator | Monday 23 March 2026 00:46:25 +0000 (0:00:00.854) 0:00:39.776 **********
2026-03-23 00:56:09.817329 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817332 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817335 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817338 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.817341 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.817344 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.817347 | orchestrator |
2026-03-23 00:56:09.817351 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-23 00:56:09.817354 | orchestrator | Monday 23 March 2026 00:46:26 +0000 (0:00:01.149) 0:00:40.925 **********
2026-03-23 00:56:09.817357 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-23 00:56:09.817360 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-23 00:56:09.817363 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-23 00:56:09.817369 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-23 00:56:09.817372 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-23 00:56:09.817375 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-23 00:56:09.817378 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-23 00:56:09.817381 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-23 00:56:09.817384 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-23 00:56:09.817387 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-23 00:56:09.817390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-23 00:56:09.817393 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-23 00:56:09.817396 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-23 00:56:09.817399 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-23 00:56:09.817402 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-23 00:56:09.817405 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-23 00:56:09.817409 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-23 00:56:09.817412 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-23 00:56:09.817415 | orchestrator |
2026-03-23 00:56:09.817418 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-23 00:56:09.817421 | orchestrator | Monday 23 March 2026 00:46:30 +0000 (0:00:04.157) 0:00:45.083 **********
2026-03-23 00:56:09.817424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-23 00:56:09.817427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-23 00:56:09.817430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-23 00:56:09.817433 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817436 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-23 00:56:09.817442 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-23 00:56:09.817445 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-23 00:56:09.817448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-23 00:56:09.817455 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-23 00:56:09.817458 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-23 00:56:09.817461 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-23 00:56:09.817467 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-23 00:56:09.817470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-23 00:56:09.817473 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817476 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.817480 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-23 00:56:09.817483 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-23 00:56:09.817486 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-23 00:56:09.817489 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.817492 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-23 00:56:09.817495 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-23 00:56:09.817498 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-23 00:56:09.817501 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.817504 | orchestrator |
2026-03-23 00:56:09.817508 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-23 00:56:09.817511 | orchestrator | Monday 23 March 2026 00:46:31 +0000 (0:00:01.425) 0:00:46.508 **********
2026-03-23 00:56:09.817514 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.817517 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.817520 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.817523 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.817526 | orchestrator |
2026-03-23 00:56:09.817530 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-23 00:56:09.817534 | orchestrator | Monday 23 March 2026 00:46:33 +0000 (0:00:01.391) 0:00:47.899 **********
2026-03-23 00:56:09.817537 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817541 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817545 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817548 | orchestrator |
2026-03-23 00:56:09.817552 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-23 00:56:09.817557 | orchestrator | Monday 23 March 2026 00:46:33 +0000 (0:00:00.346) 0:00:48.246 **********
2026-03-23 00:56:09.817565 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817571 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817576 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817581 | orchestrator |
2026-03-23 00:56:09.817586 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-23 00:56:09.817591 | orchestrator | Monday 23 March 2026 00:46:33 +0000 (0:00:00.301) 0:00:48.547 **********
2026-03-23 00:56:09.817596 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817601 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817606 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817610 | orchestrator |
2026-03-23 00:56:09.817616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-23 00:56:09.817621 | orchestrator | Monday 23 March 2026 00:46:34 +0000 (0:00:00.289) 0:00:48.837 **********
2026-03-23 00:56:09.817627 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.817632 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.817644 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.817649 | orchestrator |
2026-03-23 00:56:09.817654 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-23 00:56:09.817660 | orchestrator | Monday 23 March 2026 00:46:35 +0000 (0:00:00.935) 0:00:49.772 **********
2026-03-23 00:56:09.817668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.817673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-23 00:56:09.817679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-23 00:56:09.817683 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817687 | orchestrator |
2026-03-23 00:56:09.817690 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-23 00:56:09.817693 | orchestrator | Monday 23 March 2026 00:46:35 +0000 (0:00:00.364) 0:00:50.136 **********
2026-03-23 00:56:09.817697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.817700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-23 00:56:09.817703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-23 00:56:09.817706 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817709 | orchestrator |
2026-03-23 00:56:09.817712 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-23 00:56:09.817715 | orchestrator | Monday 23 March 2026 00:46:35 +0000 (0:00:00.445) 0:00:50.582 **********
2026-03-23 00:56:09.817718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.817721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-23 00:56:09.817724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-23 00:56:09.817728 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817731 | orchestrator |
2026-03-23 00:56:09.817734 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-23 00:56:09.817737 | orchestrator | Monday 23 March 2026 00:46:36 +0000 (0:00:00.335) 0:00:50.918 **********
2026-03-23 00:56:09.817740 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.817745 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.817751 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.817759 | orchestrator |
2026-03-23 00:56:09.817764 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-23 00:56:09.817768 | orchestrator | Monday 23 March 2026 00:46:36 +0000 (0:00:00.330) 0:00:51.248 **********
2026-03-23 00:56:09.817773 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-23 00:56:09.817778 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-23 00:56:09.817787 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-23 00:56:09.817792 | orchestrator |
2026-03-23 00:56:09.817797 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-23 00:56:09.817803 | orchestrator | Monday 23 March 2026 00:46:37 +0000 (0:00:00.860) 0:00:52.108 **********
2026-03-23 00:56:09.817808 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-23 00:56:09.817813 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-23 00:56:09.817818 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-23 00:56:09.817824 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.817827 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-23 00:56:09.817831 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-23 00:56:09.817834 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-23 00:56:09.817837 | orchestrator |
2026-03-23 00:56:09.817840 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-23 00:56:09.817843 | orchestrator | Monday 23 March 2026 00:46:38 +0000 (0:00:01.446) 0:00:53.555 **********
2026-03-23 00:56:09.817846 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-23 00:56:09.817853 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-23 00:56:09.817856 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-23 00:56:09.817859 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.817862 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-23 00:56:09.817865 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-23 00:56:09.817868 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-23 00:56:09.817871 | orchestrator |
2026-03-23 00:56:09.817874 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-23 00:56:09.817877 | orchestrator | Monday 23 March 2026 00:46:40 +0000 (0:00:01.754) 0:00:55.310 **********
2026-03-23 00:56:09.817881 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:56:09.817885 | orchestrator |
2026-03-23 00:56:09.817888 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-23 00:56:09.817891 | orchestrator | Monday 23 March 2026 00:46:41 +0000 (0:00:01.069) 0:00:56.380 **********
2026-03-23 00:56:09.817895 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:56:09.817898 | orchestrator |
2026-03-23 00:56:09.817901 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-23 00:56:09.817904 | orchestrator | Monday 23 March 2026 00:46:42 +0000 (0:00:01.268) 0:00:57.648 **********
2026-03-23 00:56:09.817907 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.817910 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.817915 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.817920 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.817931 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.817936 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.817941 | orchestrator |
2026-03-23 00:56:09.817946 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-23 00:56:09.817952 | orchestrator | Monday 23 March 2026 00:46:43 +0000 (0:00:00.928) 0:00:58.576 **********
2026-03-23 00:56:09.817957 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.817962 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.817967 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.817972 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.817978 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.817983 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.817986 | orchestrator |
2026-03-23 00:56:09.817989 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-23 00:56:09.817992 | orchestrator | Monday 23 March 2026 00:46:44 +0000 (0:00:00.732) 0:00:59.308 **********
2026-03-23 00:56:09.817995 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.817999 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818002 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.818005 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818008 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.818011 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818040 | orchestrator |
2026-03-23 00:56:09.818043 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-23 00:56:09.818046 | orchestrator | Monday 23 March 2026 00:46:45 +0000 (0:00:00.896) 0:01:00.205 **********
2026-03-23 00:56:09.818049 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.818052 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818055 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.818059 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.818065 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818068 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818071 | orchestrator |
2026-03-23 00:56:09.818074 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-23 00:56:09.818077 | orchestrator | Monday 23 March 2026 00:46:46 +0000 (0:00:01.057) 0:01:01.263 **********
2026-03-23 00:56:09.818095 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.818098 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.818102 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.818105 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.818108 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.818114 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.818117 | orchestrator |
2026-03-23 00:56:09.818121 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-23 00:56:09.818124 | orchestrator | Monday 23 March 2026 00:46:48 +0000 (0:00:01.757) 0:01:03.020 **********
2026-03-23 00:56:09.818127 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.818130 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.818133 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.818136 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818139 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818142 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818146 | orchestrator |
2026-03-23 00:56:09.818149 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-23 00:56:09.818152 | orchestrator | Monday 23 March 2026 00:46:49 +0000 (0:00:01.000) 0:01:04.021 **********
2026-03-23 00:56:09.818155 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.818158 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.818161 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.818165 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818170 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818175 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818183 | orchestrator |
2026-03-23 00:56:09.818188 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-23 00:56:09.818193 | orchestrator | Monday 23 March 2026 00:46:50 +0000 (0:00:00.775) 0:01:04.796 **********
2026-03-23 00:56:09.818199 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.818205 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.818210 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.818215 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.818221 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.818226 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.818232 | orchestrator |
2026-03-23 00:56:09.818237 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-23 00:56:09.818242 | orchestrator | Monday 23 March 2026 00:46:52 +0000 (0:00:02.142) 0:01:06.938 **********
2026-03-23 00:56:09.818245 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.818249 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.818252 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.818255 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.818259 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.818262 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.818265 | orchestrator |
2026-03-23 00:56:09.818268 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-23 00:56:09.818272 | orchestrator | Monday 23 March 2026 00:46:53 +0000 (0:00:01.028) 0:01:08.583 **********
2026-03-23 00:56:09.818275 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.818278 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.818281 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.818285 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818288 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818291 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818295 | orchestrator |
2026-03-23 00:56:09.818298 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-23 00:56:09.818306 | orchestrator | Monday 23 March 2026 00:46:54 +0000 (0:00:01.028) 0:01:09.612 **********
2026-03-23 00:56:09.818309 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.818313 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.818316 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.818319 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.818322 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.818326 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.818329 | orchestrator |
2026-03-23 00:56:09.818332 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-23 00:56:09.818336 | orchestrator | Monday 23 March 2026 00:46:55 +0000 (0:00:00.839) 0:01:10.452 **********
2026-03-23 00:56:09.818339 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.818342 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.818345 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.818351 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818355 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818358 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818361 | orchestrator |
2026-03-23 00:56:09.818364 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-23 00:56:09.818367 | orchestrator | Monday 23 March 2026 00:46:56 +0000 (0:00:00.831) 0:01:11.283 **********
2026-03-23 00:56:09.818370 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.818373 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.818377 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.818380 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818383 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818386 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818390 | orchestrator |
2026-03-23 00:56:09.818393 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-23 00:56:09.818396 | orchestrator | Monday 23 March 2026 00:46:57 +0000 (0:00:00.522) 0:01:11.805 **********
2026-03-23 00:56:09.818399 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.818402 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.818405 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.818408 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818412 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818415 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818418 | orchestrator |
2026-03-23 00:56:09.818421 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-23 00:56:09.818424 | orchestrator | Monday 23 March 2026 00:46:57 +0000 (0:00:00.855) 0:01:12.661 **********
2026-03-23 00:56:09.818428 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.818431 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.818434 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.818437 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818440 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818444 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818448 | orchestrator |
2026-03-23 00:56:09.818453 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-23 00:56:09.818458 | orchestrator | Monday 23 March 2026 00:46:58 +0000 (0:00:00.542) 0:01:13.203 **********
2026-03-23 00:56:09.818463 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.818468 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.818474 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.818479 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.818493 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.818497 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.818500 | orchestrator |
2026-03-23 00:56:09.818503 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-23 00:56:09.818507 | orchestrator | Monday 23 March 2026 00:46:59 +0000 (0:00:00.836) 0:01:14.040 **********
2026-03-23 00:56:09.818510 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.818513 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.818516 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.818522 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.818526 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.818529 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.818532 | orchestrator |
2026-03-23 00:56:09.818535 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-23 00:56:09.818538 | orchestrator | Monday 23 March 2026 00:46:59 +0000 (0:00:00.513) 0:01:14.553 **********
2026-03-23 00:56:09.818541 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.818545 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.818548 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.818551 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.818554 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.818557 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.818560 | orchestrator |
2026-03-23 00:56:09.818563 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-23 00:56:09.818567 | orchestrator | Monday 23 March 2026 00:47:00 +0000 (0:00:00.819) 0:01:15.372 **********
2026-03-23 00:56:09.818570 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.818573 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.818576 |
orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.818579 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.818582 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.818585 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.818589 | orchestrator | 2026-03-23 00:56:09.818592 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-23 00:56:09.818595 | orchestrator | Monday 23 March 2026 00:47:01 +0000 (0:00:01.143) 0:01:16.516 ********** 2026-03-23 00:56:09.818599 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.818604 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.818611 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.818618 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.818623 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.818628 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.818633 | orchestrator | 2026-03-23 00:56:09.818638 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-23 00:56:09.818643 | orchestrator | Monday 23 March 2026 00:47:03 +0000 (0:00:01.682) 0:01:18.199 ********** 2026-03-23 00:56:09.818648 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.818653 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.818659 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.818664 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.818669 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.818674 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.818679 | orchestrator | 2026-03-23 00:56:09.818684 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-23 00:56:09.818689 | orchestrator | Monday 23 March 2026 00:47:05 +0000 (0:00:02.369) 0:01:20.569 ********** 2026-03-23 00:56:09.818696 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.818701 | orchestrator | 2026-03-23 00:56:09.818706 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-23 00:56:09.818712 | orchestrator | Monday 23 March 2026 00:47:07 +0000 (0:00:01.436) 0:01:22.005 ********** 2026-03-23 00:56:09.818717 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.818722 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.818727 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.818735 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.818741 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.818746 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.818751 | orchestrator | 2026-03-23 00:56:09.818757 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-23 00:56:09.818762 | orchestrator | Monday 23 March 2026 00:47:07 +0000 (0:00:00.737) 0:01:22.743 ********** 2026-03-23 00:56:09.818771 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.818777 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.818782 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.818787 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.818792 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.818797 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.818802 | orchestrator | 2026-03-23 00:56:09.818807 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-23 00:56:09.818812 | orchestrator | Monday 23 March 2026 00:47:08 +0000 (0:00:00.671) 0:01:23.414 ********** 2026-03-23 00:56:09.818817 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-23 
00:56:09.818822 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-23 00:56:09.818827 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-23 00:56:09.818833 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-23 00:56:09.818838 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-23 00:56:09.818843 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-23 00:56:09.818848 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-23 00:56:09.818853 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-23 00:56:09.818859 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-23 00:56:09.818864 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-23 00:56:09.818873 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-23 00:56:09.818878 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-23 00:56:09.818883 | orchestrator | 2026-03-23 00:56:09.818888 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-23 00:56:09.818893 | orchestrator | Monday 23 March 2026 00:47:09 +0000 (0:00:01.297) 0:01:24.712 ********** 2026-03-23 00:56:09.818898 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.818904 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.818909 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.818914 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.818919 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.818924 | 
orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.818929 | orchestrator | 2026-03-23 00:56:09.818934 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-23 00:56:09.818939 | orchestrator | Monday 23 March 2026 00:47:10 +0000 (0:00:01.002) 0:01:25.714 ********** 2026-03-23 00:56:09.818945 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.818950 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.818955 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.818960 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.818965 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.818970 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.818976 | orchestrator | 2026-03-23 00:56:09.818981 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-23 00:56:09.818986 | orchestrator | Monday 23 March 2026 00:47:11 +0000 (0:00:00.545) 0:01:26.260 ********** 2026-03-23 00:56:09.818991 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.818997 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819002 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819008 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819013 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819018 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819023 | orchestrator | 2026-03-23 00:56:09.819029 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-23 00:56:09.819036 | orchestrator | Monday 23 March 2026 00:47:12 +0000 (0:00:00.793) 0:01:27.053 ********** 2026-03-23 00:56:09.819040 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819043 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819046 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819049 | 
orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819052 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819055 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819058 | orchestrator | 2026-03-23 00:56:09.819061 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-23 00:56:09.819064 | orchestrator | Monday 23 March 2026 00:47:12 +0000 (0:00:00.593) 0:01:27.646 ********** 2026-03-23 00:56:09.819070 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.819074 | orchestrator | 2026-03-23 00:56:09.819090 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-23 00:56:09.819096 | orchestrator | Monday 23 March 2026 00:47:13 +0000 (0:00:01.121) 0:01:28.768 ********** 2026-03-23 00:56:09.819102 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.819107 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.819112 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.819117 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.819123 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.819128 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.819133 | orchestrator | 2026-03-23 00:56:09.819138 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-23 00:56:09.819146 | orchestrator | Monday 23 March 2026 00:48:41 +0000 (0:01:27.819) 0:02:56.588 ********** 2026-03-23 00:56:09.819152 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-23 00:56:09.819157 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-23 00:56:09.819162 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-23 00:56:09.819167 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819172 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-23 00:56:09.819177 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-23 00:56:09.819183 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-23 00:56:09.819188 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819193 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-23 00:56:09.819199 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-23 00:56:09.819204 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-23 00:56:09.819209 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819214 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-23 00:56:09.819219 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-23 00:56:09.819224 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-23 00:56:09.819229 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819234 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-23 00:56:09.819239 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-23 00:56:09.819244 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-23 00:56:09.819250 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819258 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-23 00:56:09.819264 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-03-23 00:56:09.819273 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-23 00:56:09.819278 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819283 | orchestrator | 2026-03-23 00:56:09.819289 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-23 00:56:09.819294 | orchestrator | Monday 23 March 2026 00:48:42 +0000 (0:00:00.744) 0:02:57.332 ********** 2026-03-23 00:56:09.819299 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819304 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819309 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819314 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819319 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819324 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819329 | orchestrator | 2026-03-23 00:56:09.819334 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-23 00:56:09.819339 | orchestrator | Monday 23 March 2026 00:48:43 +0000 (0:00:00.611) 0:02:57.944 ********** 2026-03-23 00:56:09.819344 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819350 | orchestrator | 2026-03-23 00:56:09.819355 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-23 00:56:09.819360 | orchestrator | Monday 23 March 2026 00:48:43 +0000 (0:00:00.174) 0:02:58.118 ********** 2026-03-23 00:56:09.819365 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819370 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819376 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819381 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819386 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819391 | orchestrator | skipping: 
[testbed-node-2] 2026-03-23 00:56:09.819396 | orchestrator | 2026-03-23 00:56:09.819401 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-23 00:56:09.819406 | orchestrator | Monday 23 March 2026 00:48:43 +0000 (0:00:00.593) 0:02:58.711 ********** 2026-03-23 00:56:09.819412 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819417 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819422 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819428 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819433 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819438 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819443 | orchestrator | 2026-03-23 00:56:09.819448 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-23 00:56:09.819454 | orchestrator | Monday 23 March 2026 00:48:44 +0000 (0:00:00.840) 0:02:59.551 ********** 2026-03-23 00:56:09.819459 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819464 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819469 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819474 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819479 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819484 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819490 | orchestrator | 2026-03-23 00:56:09.819495 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-23 00:56:09.819500 | orchestrator | Monday 23 March 2026 00:48:45 +0000 (0:00:00.623) 0:03:00.175 ********** 2026-03-23 00:56:09.819505 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.819510 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.819516 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.819521 | orchestrator | ok: [testbed-node-0] 2026-03-23 
00:56:09.819526 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.819531 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.819536 | orchestrator | 2026-03-23 00:56:09.819541 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-23 00:56:09.819547 | orchestrator | Monday 23 March 2026 00:48:46 +0000 (0:00:01.513) 0:03:01.688 ********** 2026-03-23 00:56:09.819552 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.819563 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.819568 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.819574 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.819579 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.819584 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.819589 | orchestrator | 2026-03-23 00:56:09.819595 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-23 00:56:09.819600 | orchestrator | Monday 23 March 2026 00:48:47 +0000 (0:00:00.583) 0:03:02.272 ********** 2026-03-23 00:56:09.819605 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.819611 | orchestrator | 2026-03-23 00:56:09.819616 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-23 00:56:09.819621 | orchestrator | Monday 23 March 2026 00:48:48 +0000 (0:00:01.436) 0:03:03.709 ********** 2026-03-23 00:56:09.819624 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819627 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819630 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819633 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819636 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819639 | orchestrator | skipping: 
[testbed-node-2] 2026-03-23 00:56:09.819642 | orchestrator | 2026-03-23 00:56:09.819645 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-23 00:56:09.819648 | orchestrator | Monday 23 March 2026 00:48:49 +0000 (0:00:01.013) 0:03:04.723 ********** 2026-03-23 00:56:09.819652 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819655 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819658 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819661 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819664 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819667 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819670 | orchestrator | 2026-03-23 00:56:09.819673 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-23 00:56:09.819677 | orchestrator | Monday 23 March 2026 00:48:50 +0000 (0:00:00.927) 0:03:05.651 ********** 2026-03-23 00:56:09.819680 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819683 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819689 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819692 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819695 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819698 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819702 | orchestrator | 2026-03-23 00:56:09.819705 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-23 00:56:09.819708 | orchestrator | Monday 23 March 2026 00:48:51 +0000 (0:00:00.674) 0:03:06.326 ********** 2026-03-23 00:56:09.819711 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819714 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819717 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819720 | orchestrator | skipping: 
[testbed-node-0] 2026-03-23 00:56:09.819723 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819726 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819729 | orchestrator | 2026-03-23 00:56:09.819734 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-23 00:56:09.819738 | orchestrator | Monday 23 March 2026 00:48:52 +0000 (0:00:01.207) 0:03:07.533 ********** 2026-03-23 00:56:09.819748 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819753 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819758 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819762 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819767 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819772 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819776 | orchestrator | 2026-03-23 00:56:09.819780 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-23 00:56:09.819792 | orchestrator | Monday 23 March 2026 00:48:53 +0000 (0:00:00.861) 0:03:08.395 ********** 2026-03-23 00:56:09.819797 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819802 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819807 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819812 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819817 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819822 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819827 | orchestrator | 2026-03-23 00:56:09.819832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-23 00:56:09.819837 | orchestrator | Monday 23 March 2026 00:48:54 +0000 (0:00:01.148) 0:03:09.544 ********** 2026-03-23 00:56:09.819841 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819845 | orchestrator | skipping: 
[testbed-node-4] 2026-03-23 00:56:09.819850 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819854 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819858 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819863 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819867 | orchestrator | 2026-03-23 00:56:09.819872 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-23 00:56:09.819877 | orchestrator | Monday 23 March 2026 00:48:55 +0000 (0:00:00.677) 0:03:10.222 ********** 2026-03-23 00:56:09.819882 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.819887 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.819892 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.819897 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.819901 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.819906 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.819910 | orchestrator | 2026-03-23 00:56:09.819914 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-23 00:56:09.819920 | orchestrator | Monday 23 March 2026 00:48:56 +0000 (0:00:00.733) 0:03:10.956 ********** 2026-03-23 00:56:09.819924 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.819929 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.819934 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.819938 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.819943 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.819948 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.819954 | orchestrator | 2026-03-23 00:56:09.819958 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-23 00:56:09.819968 | orchestrator | Monday 23 March 2026 00:48:57 +0000 (0:00:01.001) 0:03:11.957 ********** 2026-03-23 
00:56:09.819975 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.819981 | orchestrator | 2026-03-23 00:56:09.819986 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-23 00:56:09.819992 | orchestrator | Monday 23 March 2026 00:48:58 +0000 (0:00:00.943) 0:03:12.901 ********** 2026-03-23 00:56:09.819997 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-23 00:56:09.820003 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-23 00:56:09.820008 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-23 00:56:09.820013 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-23 00:56:09.820018 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-23 00:56:09.820023 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-23 00:56:09.820028 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-23 00:56:09.820033 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-23 00:56:09.820039 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-23 00:56:09.820044 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-23 00:56:09.820049 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-23 00:56:09.820060 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-23 00:56:09.820066 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-23 00:56:09.820070 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-23 00:56:09.820073 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-23 00:56:09.820076 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 
2026-03-23 00:56:09.820109 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-23 00:56:09.820115 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-23 00:56:09.820126 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-23 00:56:09.820131 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-23 00:56:09.820135 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-23 00:56:09.820140 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-23 00:56:09.820144 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-23 00:56:09.820149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-23 00:56:09.820153 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-23 00:56:09.820158 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-23 00:56:09.820162 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-23 00:56:09.820167 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-23 00:56:09.820173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-23 00:56:09.820176 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-23 00:56:09.820179 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-23 00:56:09.820182 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-23 00:56:09.820185 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-23 00:56:09.820189 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-23 00:56:09.820192 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-23 00:56:09.820195 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-23 00:56:09.820198 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-23 00:56:09.820201 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-23 00:56:09.820204 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-23 00:56:09.820207 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-23 00:56:09.820211 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-23 00:56:09.820214 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-23 00:56:09.820217 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-23 00:56:09.820220 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-23 00:56:09.820223 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-23 00:56:09.820226 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-23 00:56:09.820229 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-23 00:56:09.820232 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-23 00:56:09.820235 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-23 00:56:09.820238 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-23 00:56:09.820241 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-23 00:56:09.820245 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-23 00:56:09.820248 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-23 00:56:09.820255 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-23 00:56:09.820258 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-23 00:56:09.820261 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-23 00:56:09.820267 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-23 00:56:09.820270 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-23 00:56:09.820273 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-23 00:56:09.820276 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-23 00:56:09.820279 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-23 00:56:09.820282 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-23 00:56:09.820286 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-23 00:56:09.820289 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-23 00:56:09.820292 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-23 00:56:09.820295 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-23 00:56:09.820298 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-23 00:56:09.820301 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-23 00:56:09.820304 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-23 00:56:09.820307 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-23 00:56:09.820310 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-23 00:56:09.820313 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-23 00:56:09.820316 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-23 00:56:09.820319 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-23 00:56:09.820322 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-23 00:56:09.820326 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-23 00:56:09.820331 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-23 00:56:09.820334 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-23 00:56:09.820338 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-23 00:56:09.820341 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-23 00:56:09.820344 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-23 00:56:09.820347 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-23 00:56:09.820350 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-23 00:56:09.820353 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-23 00:56:09.820356 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-23 00:56:09.820359 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-23 00:56:09.820363 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-23 00:56:09.820366 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-23 00:56:09.820369 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-23 00:56:09.820372 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-23 00:56:09.820375 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-23 00:56:09.820378 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-23 00:56:09.820381 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-23 00:56:09.820389 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-23 00:56:09.820392 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-23 00:56:09.820395 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-23 00:56:09.820399 | orchestrator |
2026-03-23 00:56:09.820402 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-23 00:56:09.820405 | orchestrator | Monday 23 March 2026 00:49:03 +0000 (0:00:05.449) 0:03:18.350 **********
2026-03-23 00:56:09.820408 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820411 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820414 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820418 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.820421 | orchestrator |
2026-03-23 00:56:09.820424 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-23 00:56:09.820427 | orchestrator | Monday 23 March 2026 00:49:04 +0000 (0:00:01.119) 0:03:19.470 **********
2026-03-23 00:56:09.820430 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.820434 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.820437 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.820440 | orchestrator |
2026-03-23 00:56:09.820443 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-23 00:56:09.820446 | orchestrator | Monday 23 March 2026 00:49:05 +0000 (0:00:00.656) 0:03:20.127 **********
2026-03-23 00:56:09.820452 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.820455 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.820458 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.820461 | orchestrator |
2026-03-23 00:56:09.820464 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-23 00:56:09.820468 | orchestrator | Monday 23 March 2026 00:49:06 +0000 (0:00:01.233) 0:03:21.361 **********
2026-03-23 00:56:09.820471 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.820474 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.820477 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.820480 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820483 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820486 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820489 | orchestrator |
2026-03-23 00:56:09.820492 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-23 00:56:09.820496 | orchestrator | Monday 23 March 2026 00:49:07 +0000 (0:00:00.552) 0:03:21.913 **********
2026-03-23 00:56:09.820499 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.820502 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.820505 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.820508 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820511 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820514 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820517 | orchestrator |
2026-03-23 00:56:09.820520 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-23 00:56:09.820523 | orchestrator | Monday 23 March 2026 00:49:07 +0000 (0:00:00.663) 0:03:22.577 **********
2026-03-23 00:56:09.820526 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820530 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820533 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820539 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820542 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820545 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820548 | orchestrator |
2026-03-23 00:56:09.820553 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-23 00:56:09.820557 | orchestrator | Monday 23 March 2026 00:49:08 +0000 (0:00:00.516) 0:03:23.093 **********
2026-03-23 00:56:09.820560 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820563 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820566 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820569 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820572 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820575 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820578 | orchestrator |
2026-03-23 00:56:09.820582 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-23 00:56:09.820585 | orchestrator | Monday 23 March 2026 00:49:08 +0000 (0:00:00.623) 0:03:23.717 **********
2026-03-23 00:56:09.820588 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820591 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820594 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820597 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820600 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820603 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820606 | orchestrator |
2026-03-23 00:56:09.820610 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-23 00:56:09.820613 | orchestrator | Monday 23 March 2026 00:49:09 +0000 (0:00:00.492) 0:03:24.209 **********
2026-03-23 00:56:09.820616 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820619 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820622 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820625 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820628 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820631 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820634 | orchestrator |
2026-03-23 00:56:09.820638 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-23 00:56:09.820641 | orchestrator | Monday 23 March 2026 00:49:09 +0000 (0:00:00.501) 0:03:24.711 **********
2026-03-23 00:56:09.820644 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820647 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820651 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820654 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820657 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820660 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820663 | orchestrator |
2026-03-23 00:56:09.820666 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-23 00:56:09.820669 | orchestrator | Monday 23 March 2026 00:49:10 +0000 (0:00:00.814) 0:03:25.525 **********
2026-03-23 00:56:09.820673 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820676 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820679 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820682 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820685 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820688 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820691 | orchestrator |
2026-03-23 00:56:09.820694 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-23 00:56:09.820697 | orchestrator | Monday 23 March 2026 00:49:11 +0000 (0:00:00.519) 0:03:26.045 **********
2026-03-23 00:56:09.820700 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820704 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820707 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820710 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.820713 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.820718 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.820721 | orchestrator |
2026-03-23 00:56:09.820724 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-23 00:56:09.820727 | orchestrator | Monday 23 March 2026 00:49:13 +0000 (0:00:02.416) 0:03:28.462 **********
2026-03-23 00:56:09.820731 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.820736 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.820739 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.820742 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820745 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820748 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820752 | orchestrator |
2026-03-23 00:56:09.820755 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-23 00:56:09.820758 | orchestrator | Monday 23 March 2026 00:49:14 +0000 (0:00:00.517) 0:03:28.980 **********
2026-03-23 00:56:09.820761 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.820764 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.820767 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.820771 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820774 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820777 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820780 | orchestrator |
2026-03-23 00:56:09.820783 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-23 00:56:09.820786 | orchestrator | Monday 23 March 2026 00:49:14 +0000 (0:00:00.754) 0:03:29.735 **********
2026-03-23 00:56:09.820789 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820792 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820795 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820798 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820801 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820805 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820808 | orchestrator |
2026-03-23 00:56:09.820811 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-23 00:56:09.820814 | orchestrator | Monday 23 March 2026 00:49:15 +0000 (0:00:00.542) 0:03:30.277 **********
2026-03-23 00:56:09.820817 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.820820 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.820823 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.820827 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820832 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820835 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820838 | orchestrator |
2026-03-23 00:56:09.820841 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-23 00:56:09.820844 | orchestrator | Monday 23 March 2026 00:49:16 +0000 (0:00:00.747) 0:03:31.025 **********
2026-03-23 00:56:09.820849 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-23 00:56:09.820853 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-23 00:56:09.820857 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820860 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-23 00:56:09.820865 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-23 00:56:09.820868 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820871 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-23 00:56:09.820875 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-23 00:56:09.820878 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820881 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820884 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820887 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820890 | orchestrator |
2026-03-23 00:56:09.820893 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-23 00:56:09.820896 | orchestrator | Monday 23 March 2026 00:49:16 +0000 (0:00:00.577) 0:03:31.602 **********
2026-03-23 00:56:09.820900 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820903 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820906 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820909 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820912 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820915 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820918 | orchestrator |
2026-03-23 00:56:09.820921 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-23 00:56:09.820924 | orchestrator | Monday 23 March 2026 00:49:17 +0000 (0:00:00.713) 0:03:32.316 **********
2026-03-23 00:56:09.820927 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820930 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820934 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820937 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820940 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820943 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820946 | orchestrator |
2026-03-23 00:56:09.820949 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-23 00:56:09.820952 | orchestrator | Monday 23 March 2026 00:49:18 +0000 (0:00:00.739) 0:03:32.849 **********
2026-03-23 00:56:09.820955 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820959 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820962 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820965 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.820968 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.820971 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.820974 | orchestrator |
2026-03-23 00:56:09.820977 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-23 00:56:09.820980 | orchestrator | Monday 23 March 2026 00:49:18 +0000 (0:00:00.524) 0:03:33.589 **********
2026-03-23 00:56:09.820983 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.820986 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.820989 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.820993 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.821002 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.821007 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.821012 | orchestrator |
2026-03-23 00:56:09.821016 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-23 00:56:09.821025 | orchestrator | Monday 23 March 2026 00:49:19 +0000 (0:00:00.665) 0:03:34.114 **********
2026-03-23 00:56:09.821030 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821034 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.821040 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.821044 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.821050 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.821055 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.821060 | orchestrator |
2026-03-23 00:56:09.821066 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-23 00:56:09.821072 | orchestrator | Monday 23 March 2026 00:49:20 +0000 (0:00:00.665) 0:03:34.780 **********
2026-03-23 00:56:09.821077 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.821092 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.821098 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.821103 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.821108 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.821113 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.821118 | orchestrator |
2026-03-23 00:56:09.821123 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-23 00:56:09.821181 | orchestrator | Monday 23 March 2026 00:49:20 +0000 (0:00:00.766) 0:03:35.547 **********
2026-03-23 00:56:09.821187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.821190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-23 00:56:09.821193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-23 00:56:09.821196 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821199 | orchestrator |
2026-03-23 00:56:09.821202 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-23 00:56:09.821206 | orchestrator | Monday 23 March 2026 00:49:21 +0000 (0:00:00.536) 0:03:36.083 **********
2026-03-23 00:56:09.821209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.821212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-23 00:56:09.821215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-23 00:56:09.821218 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821221 | orchestrator |
2026-03-23 00:56:09.821224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-23 00:56:09.821227 | orchestrator | Monday 23 March 2026 00:49:21 +0000 (0:00:00.533) 0:03:36.617 **********
2026-03-23 00:56:09.821231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.821234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-23 00:56:09.821237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-23 00:56:09.821240 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821243 | orchestrator |
2026-03-23 00:56:09.821246 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-23 00:56:09.821249 | orchestrator | Monday 23 March 2026 00:49:22 +0000 (0:00:00.632) 0:03:37.249 **********
2026-03-23 00:56:09.821252 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.821255 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.821258 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.821261 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.821264 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.821267 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.821270 | orchestrator |
2026-03-23 00:56:09.821273 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-23 00:56:09.821276 | orchestrator | Monday 23 March 2026 00:49:23 +0000 (0:00:00.544) 0:03:37.794 **********
2026-03-23 00:56:09.821283 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-23 00:56:09.821287 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-23 00:56:09.821290 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-23 00:56:09.821295 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-23 00:56:09.821304 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.821311 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-23 00:56:09.821317 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.821321 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-23 00:56:09.821326 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.821331 | orchestrator |
2026-03-23 00:56:09.821335 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-23 00:56:09.821340 | orchestrator | Monday 23 March 2026 00:49:24 +0000 (0:00:01.675) 0:03:39.470 **********
2026-03-23 00:56:09.821345 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.821349 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.821354 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.821358 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:56:09.821363 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:56:09.821369 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:56:09.821374 | orchestrator |
2026-03-23 00:56:09.821379 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-23 00:56:09.821384 | orchestrator | Monday 23 March 2026 00:49:27 +0000 (0:00:02.366) 0:03:41.836 **********
2026-03-23 00:56:09.821389 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.821394 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.821400 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.821403 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:56:09.821406 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:56:09.821409 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:56:09.821412 | orchestrator |
2026-03-23 00:56:09.821415 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-23 00:56:09.821418 | orchestrator | Monday 23 March 2026 00:49:28 +0000 (0:00:01.182) 0:03:43.019 **********
2026-03-23 00:56:09.821421 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821425 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.821428 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.821431 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:56:09.821434 | orchestrator |
2026-03-23 00:56:09.821437 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-23 00:56:09.821445 | orchestrator | Monday 23 March 2026 00:49:29 +0000 (0:00:00.776) 0:03:43.796 **********
2026-03-23 00:56:09.821448 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.821452 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.821455 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.821458 | orchestrator |
2026-03-23 00:56:09.821461 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-23 00:56:09.821464 | orchestrator | Monday 23 March 2026 00:49:29 +0000 (0:00:00.239) 0:03:44.035 **********
2026-03-23 00:56:09.821467 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:56:09.821470 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:56:09.821473 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:56:09.821476 | orchestrator |
2026-03-23 00:56:09.821479 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-23 00:56:09.821483 | orchestrator | Monday 23 March 2026 00:49:30 +0000 (0:00:01.185) 0:03:45.221 **********
2026-03-23 00:56:09.821486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-23 00:56:09.821489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-23 00:56:09.821492 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-23 00:56:09.821495 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.821498 | orchestrator |
2026-03-23 00:56:09.821507 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-23 00:56:09.821510 | orchestrator | Monday 23 March 2026 00:49:31 +0000 (0:00:00.698) 0:03:45.920 **********
2026-03-23 00:56:09.821513 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:56:09.821516 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:56:09.821519 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:56:09.821522 | orchestrator |
2026-03-23 00:56:09.821526 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-23 00:56:09.821529 | orchestrator | Monday 23 March 2026 00:49:31 +0000 (0:00:00.261) 0:03:46.181 **********
2026-03-23 00:56:09.821532 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.821535 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.821538 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.821541 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.821544 | orchestrator |
2026-03-23 00:56:09.821547 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-23 00:56:09.821551 | orchestrator | Monday 23 March 2026 00:49:32 +0000 (0:00:00.911) 0:03:47.092 **********
2026-03-23 00:56:09.821554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.821557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-23 00:56:09.821560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-23 00:56:09.821563 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821566 | orchestrator |
2026-03-23 00:56:09.821569 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-23 00:56:09.821572 | orchestrator | Monday 23 March 2026 00:49:32 +0000 (0:00:00.359) 0:03:47.452 **********
2026-03-23 00:56:09.821575 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821578 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.821581 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.821584 | orchestrator |
2026-03-23 00:56:09.821588 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-23 00:56:09.821591 | orchestrator | Monday 23 March 2026 00:49:33 +0000 (0:00:00.400) 0:03:47.853 **********
2026-03-23 00:56:09.821594 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821597 | orchestrator |
2026-03-23 00:56:09.821600 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-23 00:56:09.821603 | orchestrator | Monday 23 March 2026 00:49:33 +0000 (0:00:00.203) 0:03:48.056 **********
2026-03-23 00:56:09.821606 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821609 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.821614 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.821617 | orchestrator |
2026-03-23 00:56:09.821620 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-23 00:56:09.821623 | orchestrator | Monday 23 March 2026 00:49:33 +0000 (0:00:00.284) 0:03:48.340 **********
2026-03-23 00:56:09.821626 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821629 | orchestrator |
2026-03-23 00:56:09.821633 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-23 00:56:09.821636 | orchestrator | Monday 23 March 2026 00:49:33 +0000 (0:00:00.178) 0:03:48.541 **********
2026-03-23 00:56:09.821639 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821642 | orchestrator |
2026-03-23 00:56:09.821645 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-23 00:56:09.821648 | orchestrator | Monday 23 March 2026 00:49:33 +0000 (0:00:00.103) 0:03:48.719 **********
2026-03-23 00:56:09.821651 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821654 | orchestrator |
2026-03-23 00:56:09.821657 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-23 00:56:09.821661 | orchestrator | Monday 23 March 2026 00:49:34 +0000 (0:00:00.182) 0:03:48.822 **********
2026-03-23 00:56:09.821664 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821667 | orchestrator |
2026-03-23 00:56:09.821672 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-23 00:56:09.821675 | orchestrator | Monday 23 March 2026 00:49:34 +0000 (0:00:00.203) 0:03:49.004 **********
2026-03-23 00:56:09.821678 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821681 | orchestrator |
2026-03-23 00:56:09.821685 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-23 00:56:09.821688 | orchestrator | Monday 23 March 2026 00:49:34 +0000 (0:00:00.603) 0:03:49.208 **********
2026-03-23 00:56:09.821691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.821694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-23 00:56:09.821697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-23 00:56:09.821700 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821703 | orchestrator |
2026-03-23 00:56:09.821706 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-23 00:56:09.821711 | orchestrator | Monday 23 March 2026 00:49:35 +0000 (0:00:00.441) 0:03:49.812 **********
2026-03-23 00:56:09.821715 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821718 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.821721 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.821724 | orchestrator |
2026-03-23 00:56:09.821727 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-23 00:56:09.821730 | orchestrator | Monday 23 March 2026 00:49:35 +0000 (0:00:00.187) 0:03:50.253 **********
2026-03-23 00:56:09.821734 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821737 | orchestrator |
2026-03-23 00:56:09.821740 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-23 00:56:09.821743 | orchestrator | Monday 23 March 2026 00:49:35 +0000 (0:00:00.198) 0:03:50.441 **********
2026-03-23 00:56:09.821746 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.821749 | orchestrator |
2026-03-23 00:56:09.821752 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-23 00:56:09.821755 | orchestrator | Monday 23 March 2026 00:49:35 +0000 (0:00:00.198) 0:03:50.640 **********
2026-03-23 00:56:09.821759 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:56:09.821762 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:56:09.821765 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:56:09.821768 | orchestrator | included:
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.821771 | orchestrator | 2026-03-23 00:56:09.821774 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-23 00:56:09.821777 | orchestrator | Monday 23 March 2026 00:49:36 +0000 (0:00:00.799) 0:03:51.439 ********** 2026-03-23 00:56:09.821780 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.821784 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.821787 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.821790 | orchestrator | 2026-03-23 00:56:09.821793 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-23 00:56:09.821796 | orchestrator | Monday 23 March 2026 00:49:36 +0000 (0:00:00.315) 0:03:51.754 ********** 2026-03-23 00:56:09.821799 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.821802 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.821806 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.821809 | orchestrator | 2026-03-23 00:56:09.821812 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-23 00:56:09.821815 | orchestrator | Monday 23 March 2026 00:49:38 +0000 (0:00:01.237) 0:03:52.992 ********** 2026-03-23 00:56:09.821818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-23 00:56:09.821821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-23 00:56:09.821824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-23 00:56:09.821827 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.821830 | orchestrator | 2026-03-23 00:56:09.821835 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-23 00:56:09.821839 | orchestrator | Monday 23 March 2026 00:49:38 +0000 (0:00:00.687) 
0:03:53.679 ********** 2026-03-23 00:56:09.821842 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.821845 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.821848 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.821851 | orchestrator | 2026-03-23 00:56:09.821854 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-23 00:56:09.821857 | orchestrator | Monday 23 March 2026 00:49:39 +0000 (0:00:00.285) 0:03:53.965 ********** 2026-03-23 00:56:09.821860 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.821863 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.821866 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.821870 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.821873 | orchestrator | 2026-03-23 00:56:09.821878 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-23 00:56:09.821881 | orchestrator | Monday 23 March 2026 00:49:40 +0000 (0:00:00.875) 0:03:54.840 ********** 2026-03-23 00:56:09.821884 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.821887 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.821890 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.821893 | orchestrator | 2026-03-23 00:56:09.821896 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-23 00:56:09.821899 | orchestrator | Monday 23 March 2026 00:49:40 +0000 (0:00:00.270) 0:03:55.111 ********** 2026-03-23 00:56:09.821903 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.821906 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.821909 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.821912 | orchestrator | 2026-03-23 00:56:09.821915 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2026-03-23 00:56:09.821918 | orchestrator | Monday 23 March 2026 00:49:41 +0000 (0:00:01.348) 0:03:56.460 ********** 2026-03-23 00:56:09.821921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-23 00:56:09.821924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-23 00:56:09.821927 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-23 00:56:09.821930 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.821933 | orchestrator | 2026-03-23 00:56:09.821936 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-23 00:56:09.821940 | orchestrator | Monday 23 March 2026 00:49:42 +0000 (0:00:00.524) 0:03:56.985 ********** 2026-03-23 00:56:09.821943 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.821946 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.821949 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.821952 | orchestrator | 2026-03-23 00:56:09.821955 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-23 00:56:09.821958 | orchestrator | Monday 23 March 2026 00:49:42 +0000 (0:00:00.300) 0:03:57.285 ********** 2026-03-23 00:56:09.821961 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.821964 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.821967 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.821970 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.821973 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.821979 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.821982 | orchestrator | 2026-03-23 00:56:09.821985 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-23 00:56:09.821988 | orchestrator | Monday 23 March 2026 00:49:43 +0000 (0:00:00.504) 0:03:57.790 ********** 2026-03-23 
00:56:09.821991 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.821994 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.821997 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.822001 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-03-23 00:56:09.822006 | orchestrator | 2026-03-23 00:56:09.822009 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-23 00:56:09.822012 | orchestrator | Monday 23 March 2026 00:49:44 +0000 (0:00:00.999) 0:03:58.789 ********** 2026-03-23 00:56:09.822116 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822120 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822123 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822126 | orchestrator | 2026-03-23 00:56:09.822129 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-23 00:56:09.822132 | orchestrator | Monday 23 March 2026 00:49:44 +0000 (0:00:00.337) 0:03:59.127 ********** 2026-03-23 00:56:09.822135 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.822138 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.822142 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.822145 | orchestrator | 2026-03-23 00:56:09.822148 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-23 00:56:09.822151 | orchestrator | Monday 23 March 2026 00:49:45 +0000 (0:00:01.423) 0:04:00.551 ********** 2026-03-23 00:56:09.822154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-23 00:56:09.822157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-23 00:56:09.822160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-23 00:56:09.822163 | orchestrator | skipping: [testbed-node-0] 2026-03-23 
00:56:09.822166 | orchestrator | 2026-03-23 00:56:09.822169 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-23 00:56:09.822172 | orchestrator | Monday 23 March 2026 00:49:46 +0000 (0:00:00.787) 0:04:01.338 ********** 2026-03-23 00:56:09.822176 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822179 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822182 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822185 | orchestrator | 2026-03-23 00:56:09.822188 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-23 00:56:09.822192 | orchestrator | 2026-03-23 00:56:09.822195 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-23 00:56:09.822198 | orchestrator | Monday 23 March 2026 00:49:47 +0000 (0:00:00.548) 0:04:01.887 ********** 2026-03-23 00:56:09.822201 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.822204 | orchestrator | 2026-03-23 00:56:09.822208 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-23 00:56:09.822211 | orchestrator | Monday 23 March 2026 00:49:47 +0000 (0:00:00.619) 0:04:02.507 ********** 2026-03-23 00:56:09.822214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.822217 | orchestrator | 2026-03-23 00:56:09.822220 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-23 00:56:09.822223 | orchestrator | Monday 23 March 2026 00:49:48 +0000 (0:00:00.579) 0:04:03.086 ********** 2026-03-23 00:56:09.822226 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822229 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822232 | 
orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822235 | orchestrator | 2026-03-23 00:56:09.822241 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-23 00:56:09.822244 | orchestrator | Monday 23 March 2026 00:49:49 +0000 (0:00:00.794) 0:04:03.881 ********** 2026-03-23 00:56:09.822247 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822251 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822254 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822257 | orchestrator | 2026-03-23 00:56:09.822260 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-23 00:56:09.822263 | orchestrator | Monday 23 March 2026 00:49:49 +0000 (0:00:00.505) 0:04:04.386 ********** 2026-03-23 00:56:09.822266 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822272 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822275 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822278 | orchestrator | 2026-03-23 00:56:09.822281 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-23 00:56:09.822284 | orchestrator | Monday 23 March 2026 00:49:49 +0000 (0:00:00.262) 0:04:04.649 ********** 2026-03-23 00:56:09.822287 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822290 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822293 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822296 | orchestrator | 2026-03-23 00:56:09.822299 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-23 00:56:09.822303 | orchestrator | Monday 23 March 2026 00:49:50 +0000 (0:00:00.353) 0:04:05.003 ********** 2026-03-23 00:56:09.822306 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822309 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822312 | orchestrator | ok: 
[testbed-node-2] 2026-03-23 00:56:09.822315 | orchestrator | 2026-03-23 00:56:09.822318 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-23 00:56:09.822321 | orchestrator | Monday 23 March 2026 00:49:50 +0000 (0:00:00.654) 0:04:05.657 ********** 2026-03-23 00:56:09.822324 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822328 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822331 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822334 | orchestrator | 2026-03-23 00:56:09.822337 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-23 00:56:09.822340 | orchestrator | Monday 23 March 2026 00:49:51 +0000 (0:00:00.265) 0:04:05.922 ********** 2026-03-23 00:56:09.822356 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822360 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822363 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822366 | orchestrator | 2026-03-23 00:56:09.822369 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-23 00:56:09.822372 | orchestrator | Monday 23 March 2026 00:49:51 +0000 (0:00:00.429) 0:04:06.352 ********** 2026-03-23 00:56:09.822375 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822378 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822381 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822384 | orchestrator | 2026-03-23 00:56:09.822387 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-23 00:56:09.822391 | orchestrator | Monday 23 March 2026 00:49:52 +0000 (0:00:00.599) 0:04:06.951 ********** 2026-03-23 00:56:09.822394 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822397 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822400 | orchestrator | ok: [testbed-node-2] 2026-03-23 
00:56:09.822403 | orchestrator | 2026-03-23 00:56:09.822406 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-23 00:56:09.822409 | orchestrator | Monday 23 March 2026 00:49:52 +0000 (0:00:00.718) 0:04:07.670 ********** 2026-03-23 00:56:09.822412 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822415 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822419 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822422 | orchestrator | 2026-03-23 00:56:09.822425 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-23 00:56:09.822428 | orchestrator | Monday 23 March 2026 00:49:53 +0000 (0:00:00.274) 0:04:07.944 ********** 2026-03-23 00:56:09.822431 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822434 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822437 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822440 | orchestrator | 2026-03-23 00:56:09.822443 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-23 00:56:09.822446 | orchestrator | Monday 23 March 2026 00:49:53 +0000 (0:00:00.530) 0:04:08.474 ********** 2026-03-23 00:56:09.822449 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822453 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822458 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822461 | orchestrator | 2026-03-23 00:56:09.822464 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-23 00:56:09.822467 | orchestrator | Monday 23 March 2026 00:49:54 +0000 (0:00:00.330) 0:04:08.804 ********** 2026-03-23 00:56:09.822470 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822473 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822476 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822480 | 
orchestrator | 2026-03-23 00:56:09.822483 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-23 00:56:09.822486 | orchestrator | Monday 23 March 2026 00:49:54 +0000 (0:00:00.348) 0:04:09.153 ********** 2026-03-23 00:56:09.822489 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822492 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822495 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822498 | orchestrator | 2026-03-23 00:56:09.822501 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-23 00:56:09.822504 | orchestrator | Monday 23 March 2026 00:49:54 +0000 (0:00:00.309) 0:04:09.462 ********** 2026-03-23 00:56:09.822508 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822511 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822514 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822517 | orchestrator | 2026-03-23 00:56:09.822520 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-23 00:56:09.822523 | orchestrator | Monday 23 March 2026 00:49:55 +0000 (0:00:00.426) 0:04:09.889 ********** 2026-03-23 00:56:09.822526 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822529 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822532 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822535 | orchestrator | 2026-03-23 00:56:09.822547 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-23 00:56:09.822551 | orchestrator | Monday 23 March 2026 00:49:55 +0000 (0:00:00.282) 0:04:10.172 ********** 2026-03-23 00:56:09.822554 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822557 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822560 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822563 | orchestrator | 
2026-03-23 00:56:09.822566 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-23 00:56:09.822569 | orchestrator | Monday 23 March 2026 00:49:55 +0000 (0:00:00.280) 0:04:10.452 ********** 2026-03-23 00:56:09.822572 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822576 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822579 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822582 | orchestrator | 2026-03-23 00:56:09.822585 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-23 00:56:09.822588 | orchestrator | Monday 23 March 2026 00:49:56 +0000 (0:00:00.325) 0:04:10.778 ********** 2026-03-23 00:56:09.822591 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822594 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822597 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822600 | orchestrator | 2026-03-23 00:56:09.822603 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-23 00:56:09.822606 | orchestrator | Monday 23 March 2026 00:49:56 +0000 (0:00:00.668) 0:04:11.446 ********** 2026-03-23 00:56:09.822610 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822613 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822616 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822619 | orchestrator | 2026-03-23 00:56:09.822622 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-23 00:56:09.822625 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:00.351) 0:04:11.797 ********** 2026-03-23 00:56:09.822628 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.822631 | orchestrator | 2026-03-23 00:56:09.822634 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-03-23 00:56:09.822639 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:00.545) 0:04:12.343 ********** 2026-03-23 00:56:09.822643 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822646 | orchestrator | 2026-03-23 00:56:09.822658 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-23 00:56:09.822662 | orchestrator | Monday 23 March 2026 00:49:57 +0000 (0:00:00.286) 0:04:12.629 ********** 2026-03-23 00:56:09.822665 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-23 00:56:09.822668 | orchestrator | 2026-03-23 00:56:09.822671 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-23 00:56:09.822675 | orchestrator | Monday 23 March 2026 00:49:58 +0000 (0:00:00.945) 0:04:13.575 ********** 2026-03-23 00:56:09.822678 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822681 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822684 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822687 | orchestrator | 2026-03-23 00:56:09.822690 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-23 00:56:09.822693 | orchestrator | Monday 23 March 2026 00:49:59 +0000 (0:00:00.383) 0:04:13.959 ********** 2026-03-23 00:56:09.822696 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822699 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822703 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822706 | orchestrator | 2026-03-23 00:56:09.822709 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-23 00:56:09.822712 | orchestrator | Monday 23 March 2026 00:49:59 +0000 (0:00:00.294) 0:04:14.253 ********** 2026-03-23 00:56:09.822715 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.822718 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.822721 | 
orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.822724 | orchestrator | 2026-03-23 00:56:09.822727 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-23 00:56:09.822730 | orchestrator | Monday 23 March 2026 00:50:01 +0000 (0:00:01.531) 0:04:15.785 ********** 2026-03-23 00:56:09.822733 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.822736 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.822739 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.822743 | orchestrator | 2026-03-23 00:56:09.822746 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-23 00:56:09.822749 | orchestrator | Monday 23 March 2026 00:50:02 +0000 (0:00:01.286) 0:04:17.071 ********** 2026-03-23 00:56:09.822752 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.822755 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.822758 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.822761 | orchestrator | 2026-03-23 00:56:09.822764 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-23 00:56:09.822767 | orchestrator | Monday 23 March 2026 00:50:03 +0000 (0:00:01.029) 0:04:18.100 ********** 2026-03-23 00:56:09.822770 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822773 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822776 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822779 | orchestrator | 2026-03-23 00:56:09.822782 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-23 00:56:09.822785 | orchestrator | Monday 23 March 2026 00:50:04 +0000 (0:00:01.523) 0:04:19.624 ********** 2026-03-23 00:56:09.822789 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.822792 | orchestrator | 2026-03-23 00:56:09.822795 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-03-23 00:56:09.822798 | orchestrator | Monday 23 March 2026 00:50:06 +0000 (0:00:01.412) 0:04:21.037 ********** 2026-03-23 00:56:09.822801 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822804 | orchestrator | 2026-03-23 00:56:09.822807 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-23 00:56:09.822810 | orchestrator | Monday 23 March 2026 00:50:06 +0000 (0:00:00.573) 0:04:21.610 ********** 2026-03-23 00:56:09.822816 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:56:09.822819 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:56:09.822822 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-23 00:56:09.822827 | orchestrator | changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-23 00:56:09.822830 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-23 00:56:09.822833 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-23 00:56:09.822836 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-23 00:56:09.822840 | orchestrator | changed: [testbed-node-2 -> {{ item }}] 2026-03-23 00:56:09.822843 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-23 00:56:09.822846 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-23 00:56:09.822849 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-23 00:56:09.822852 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-23 00:56:09.822855 | orchestrator | 2026-03-23 00:56:09.822858 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-23 00:56:09.822861 | orchestrator | Monday 23 March 2026 00:50:10 +0000 (0:00:03.709) 0:04:25.320 ********** 2026-03-23 00:56:09.822864 
| orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.822867 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.822871 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.822874 | orchestrator | 2026-03-23 00:56:09.822877 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-23 00:56:09.822880 | orchestrator | Monday 23 March 2026 00:50:12 +0000 (0:00:01.721) 0:04:27.042 ********** 2026-03-23 00:56:09.822883 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822886 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822889 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822892 | orchestrator | 2026-03-23 00:56:09.822895 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-23 00:56:09.822899 | orchestrator | Monday 23 March 2026 00:50:12 +0000 (0:00:00.365) 0:04:27.407 ********** 2026-03-23 00:56:09.822902 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.822905 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.822908 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.822911 | orchestrator | 2026-03-23 00:56:09.822914 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-23 00:56:09.822917 | orchestrator | Monday 23 March 2026 00:50:12 +0000 (0:00:00.298) 0:04:27.706 ********** 2026-03-23 00:56:09.822920 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.822933 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.822936 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.822939 | orchestrator | 2026-03-23 00:56:09.822943 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-23 00:56:09.822946 | orchestrator | Monday 23 March 2026 00:50:14 +0000 (0:00:01.905) 0:04:29.611 ********** 2026-03-23 00:56:09.822949 | orchestrator | changed: [testbed-node-0] 
2026-03-23 00:56:09.822952 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.822955 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.822958 | orchestrator | 2026-03-23 00:56:09.822961 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-23 00:56:09.822964 | orchestrator | Monday 23 March 2026 00:50:16 +0000 (0:00:01.361) 0:04:30.972 ********** 2026-03-23 00:56:09.822967 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.822971 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.822974 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.822977 | orchestrator | 2026-03-23 00:56:09.822980 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-23 00:56:09.822983 | orchestrator | Monday 23 March 2026 00:50:16 +0000 (0:00:00.294) 0:04:31.267 ********** 2026-03-23 00:56:09.822986 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.822992 | orchestrator | 2026-03-23 00:56:09.822995 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-23 00:56:09.822998 | orchestrator | Monday 23 March 2026 00:50:17 +0000 (0:00:00.679) 0:04:31.946 ********** 2026-03-23 00:56:09.823001 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823004 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823007 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823010 | orchestrator | 2026-03-23 00:56:09.823013 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-23 00:56:09.823016 | orchestrator | Monday 23 March 2026 00:50:17 +0000 (0:00:00.399) 0:04:32.346 ********** 2026-03-23 00:56:09.823020 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823023 | orchestrator | skipping: 
[testbed-node-1] 2026-03-23 00:56:09.823026 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823029 | orchestrator | 2026-03-23 00:56:09.823032 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-23 00:56:09.823035 | orchestrator | Monday 23 March 2026 00:50:17 +0000 (0:00:00.273) 0:04:32.620 ********** 2026-03-23 00:56:09.823038 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.823041 | orchestrator | 2026-03-23 00:56:09.823044 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-23 00:56:09.823047 | orchestrator | Monday 23 March 2026 00:50:18 +0000 (0:00:00.445) 0:04:33.065 ********** 2026-03-23 00:56:09.823051 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.823054 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.823057 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.823060 | orchestrator | 2026-03-23 00:56:09.823063 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-23 00:56:09.823066 | orchestrator | Monday 23 March 2026 00:50:21 +0000 (0:00:02.900) 0:04:35.966 ********** 2026-03-23 00:56:09.823069 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.823072 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.823075 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.823078 | orchestrator | 2026-03-23 00:56:09.823107 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-23 00:56:09.823110 | orchestrator | Monday 23 March 2026 00:50:22 +0000 (0:00:01.205) 0:04:37.171 ********** 2026-03-23 00:56:09.823113 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.823116 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.823119 | orchestrator | changed: 
[testbed-node-2] 2026-03-23 00:56:09.823123 | orchestrator | 2026-03-23 00:56:09.823128 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-23 00:56:09.823131 | orchestrator | Monday 23 March 2026 00:50:24 +0000 (0:00:01.801) 0:04:38.972 ********** 2026-03-23 00:56:09.823134 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.823137 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.823140 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.823143 | orchestrator | 2026-03-23 00:56:09.823146 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-23 00:56:09.823149 | orchestrator | Monday 23 March 2026 00:50:26 +0000 (0:00:01.971) 0:04:40.944 ********** 2026-03-23 00:56:09.823152 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.823155 | orchestrator | 2026-03-23 00:56:09.823158 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-23 00:56:09.823162 | orchestrator | Monday 23 March 2026 00:50:26 +0000 (0:00:00.622) 0:04:41.566 ********** 2026-03-23 00:56:09.823165 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-23 00:56:09.823168 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823171 | orchestrator | 2026-03-23 00:56:09.823174 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-23 00:56:09.823181 | orchestrator | Monday 23 March 2026 00:50:48 +0000 (0:00:21.498) 0:05:03.065 ********** 2026-03-23 00:56:09.823184 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823187 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823190 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823193 | orchestrator | 2026-03-23 00:56:09.823196 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-23 00:56:09.823199 | orchestrator | Monday 23 March 2026 00:50:54 +0000 (0:00:06.239) 0:05:09.304 ********** 2026-03-23 00:56:09.823202 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823206 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823209 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823212 | orchestrator | 2026-03-23 00:56:09.823215 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-23 00:56:09.823229 | orchestrator | Monday 23 March 2026 00:50:54 +0000 (0:00:00.273) 0:05:09.577 ********** 2026-03-23 00:56:09.823234 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8ba9da0311d7ecbb1991ebe4c7564eed0720c7fb'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-23 00:56:09.823238 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8ba9da0311d7ecbb1991ebe4c7564eed0720c7fb'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-23 00:56:09.823242 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8ba9da0311d7ecbb1991ebe4c7564eed0720c7fb'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-23 00:56:09.823246 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8ba9da0311d7ecbb1991ebe4c7564eed0720c7fb'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-23 00:56:09.823250 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8ba9da0311d7ecbb1991ebe4c7564eed0720c7fb'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-23 00:56:09.823253 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8ba9da0311d7ecbb1991ebe4c7564eed0720c7fb'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__8ba9da0311d7ecbb1991ebe4c7564eed0720c7fb'}])  2026-03-23 00:56:09.823257 | orchestrator | 2026-03-23 00:56:09.823260 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-23 00:56:09.823263 | orchestrator | Monday 23 March 2026 00:51:05 +0000 (0:00:10.249) 0:05:19.826 ********** 2026-03-23 00:56:09.823268 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823271 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823274 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823277 | orchestrator | 2026-03-23 00:56:09.823280 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-23 00:56:09.823286 | orchestrator | Monday 23 March 2026 00:51:05 +0000 (0:00:00.283) 0:05:20.110 ********** 2026-03-23 00:56:09.823289 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.823292 | orchestrator | 2026-03-23 00:56:09.823295 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-23 00:56:09.823298 | orchestrator | Monday 23 March 2026 00:51:05 +0000 (0:00:00.617) 0:05:20.728 ********** 2026-03-23 00:56:09.823302 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823305 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823308 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823311 | orchestrator | 2026-03-23 00:56:09.823314 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-23 00:56:09.823317 | orchestrator | Monday 23 March 2026 00:51:06 +0000 (0:00:00.278) 0:05:21.006 ********** 2026-03-23 00:56:09.823320 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823323 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823326 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823329 | orchestrator | 2026-03-23 00:56:09.823332 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-23 
00:56:09.823335 | orchestrator | Monday 23 March 2026 00:51:06 +0000 (0:00:00.313) 0:05:21.320 ********** 2026-03-23 00:56:09.823339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-23 00:56:09.823342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-23 00:56:09.823345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-23 00:56:09.823348 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823351 | orchestrator | 2026-03-23 00:56:09.823354 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-23 00:56:09.823357 | orchestrator | Monday 23 March 2026 00:51:07 +0000 (0:00:00.558) 0:05:21.878 ********** 2026-03-23 00:56:09.823360 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823363 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823375 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823378 | orchestrator | 2026-03-23 00:56:09.823382 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-23 00:56:09.823385 | orchestrator | 2026-03-23 00:56:09.823388 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-23 00:56:09.823391 | orchestrator | Monday 23 March 2026 00:51:07 +0000 (0:00:00.679) 0:05:22.557 ********** 2026-03-23 00:56:09.823394 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.823397 | orchestrator | 2026-03-23 00:56:09.823400 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-23 00:56:09.823403 | orchestrator | Monday 23 March 2026 00:51:08 +0000 (0:00:00.444) 0:05:23.001 ********** 2026-03-23 00:56:09.823406 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-23 00:56:09.823410 | orchestrator | 2026-03-23 00:56:09.823413 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-23 00:56:09.823416 | orchestrator | Monday 23 March 2026 00:51:08 +0000 (0:00:00.449) 0:05:23.451 ********** 2026-03-23 00:56:09.823421 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823426 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823431 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823437 | orchestrator | 2026-03-23 00:56:09.823442 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-23 00:56:09.823447 | orchestrator | Monday 23 March 2026 00:51:09 +0000 (0:00:00.887) 0:05:24.339 ********** 2026-03-23 00:56:09.823452 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823457 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823462 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823470 | orchestrator | 2026-03-23 00:56:09.823475 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-23 00:56:09.823479 | orchestrator | Monday 23 March 2026 00:51:09 +0000 (0:00:00.255) 0:05:24.594 ********** 2026-03-23 00:56:09.823485 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823490 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823495 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823500 | orchestrator | 2026-03-23 00:56:09.823506 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-23 00:56:09.823511 | orchestrator | Monday 23 March 2026 00:51:10 +0000 (0:00:00.252) 0:05:24.846 ********** 2026-03-23 00:56:09.823517 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823522 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823528 | orchestrator | skipping: 
[testbed-node-2] 2026-03-23 00:56:09.823533 | orchestrator | 2026-03-23 00:56:09.823536 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-23 00:56:09.823539 | orchestrator | Monday 23 March 2026 00:51:10 +0000 (0:00:00.257) 0:05:25.103 ********** 2026-03-23 00:56:09.823542 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823545 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823548 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823551 | orchestrator | 2026-03-23 00:56:09.823554 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-23 00:56:09.823558 | orchestrator | Monday 23 March 2026 00:51:11 +0000 (0:00:00.954) 0:05:26.058 ********** 2026-03-23 00:56:09.823561 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823564 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823567 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823570 | orchestrator | 2026-03-23 00:56:09.823573 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-23 00:56:09.823579 | orchestrator | Monday 23 March 2026 00:51:11 +0000 (0:00:00.278) 0:05:26.336 ********** 2026-03-23 00:56:09.823583 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823594 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823599 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823604 | orchestrator | 2026-03-23 00:56:09.823609 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-23 00:56:09.823614 | orchestrator | Monday 23 March 2026 00:51:11 +0000 (0:00:00.257) 0:05:26.593 ********** 2026-03-23 00:56:09.823619 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823624 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823630 | orchestrator | ok: [testbed-node-2] 2026-03-23 
00:56:09.823634 | orchestrator | 2026-03-23 00:56:09.823638 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-23 00:56:09.823641 | orchestrator | Monday 23 March 2026 00:51:12 +0000 (0:00:00.644) 0:05:27.238 ********** 2026-03-23 00:56:09.823644 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823647 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823650 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823653 | orchestrator | 2026-03-23 00:56:09.823656 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-23 00:56:09.823659 | orchestrator | Monday 23 March 2026 00:51:13 +0000 (0:00:00.846) 0:05:28.084 ********** 2026-03-23 00:56:09.823662 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823665 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823668 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823672 | orchestrator | 2026-03-23 00:56:09.823675 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-23 00:56:09.823678 | orchestrator | Monday 23 March 2026 00:51:13 +0000 (0:00:00.305) 0:05:28.390 ********** 2026-03-23 00:56:09.823681 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823684 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823687 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823690 | orchestrator | 2026-03-23 00:56:09.823693 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-23 00:56:09.823699 | orchestrator | Monday 23 March 2026 00:51:13 +0000 (0:00:00.334) 0:05:28.725 ********** 2026-03-23 00:56:09.823703 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823706 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823709 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823712 | orchestrator | 
2026-03-23 00:56:09.823715 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-23 00:56:09.823733 | orchestrator | Monday 23 March 2026 00:51:14 +0000 (0:00:00.307) 0:05:29.033 ********** 2026-03-23 00:56:09.823738 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823741 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823744 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823747 | orchestrator | 2026-03-23 00:56:09.823750 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-23 00:56:09.823753 | orchestrator | Monday 23 March 2026 00:51:14 +0000 (0:00:00.538) 0:05:29.571 ********** 2026-03-23 00:56:09.823756 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823759 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823762 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823765 | orchestrator | 2026-03-23 00:56:09.823769 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-23 00:56:09.823772 | orchestrator | Monday 23 March 2026 00:51:15 +0000 (0:00:00.299) 0:05:29.871 ********** 2026-03-23 00:56:09.823775 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823778 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823781 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823784 | orchestrator | 2026-03-23 00:56:09.823787 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-23 00:56:09.823790 | orchestrator | Monday 23 March 2026 00:51:15 +0000 (0:00:00.321) 0:05:30.193 ********** 2026-03-23 00:56:09.823793 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823796 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823799 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823803 | orchestrator | 
2026-03-23 00:56:09.823806 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-23 00:56:09.823809 | orchestrator | Monday 23 March 2026 00:51:15 +0000 (0:00:00.294) 0:05:30.487 ********** 2026-03-23 00:56:09.823812 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823815 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823818 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823821 | orchestrator | 2026-03-23 00:56:09.823824 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-23 00:56:09.823827 | orchestrator | Monday 23 March 2026 00:51:16 +0000 (0:00:00.316) 0:05:30.804 ********** 2026-03-23 00:56:09.823831 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823834 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823839 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823844 | orchestrator | 2026-03-23 00:56:09.823849 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-23 00:56:09.823853 | orchestrator | Monday 23 March 2026 00:51:16 +0000 (0:00:00.574) 0:05:31.378 ********** 2026-03-23 00:56:09.823858 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823864 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.823869 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.823874 | orchestrator | 2026-03-23 00:56:09.823880 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-23 00:56:09.823884 | orchestrator | Monday 23 March 2026 00:51:17 +0000 (0:00:00.529) 0:05:31.907 ********** 2026-03-23 00:56:09.823890 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-23 00:56:09.823895 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-23 00:56:09.823900 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-23 00:56:09.823905 | orchestrator | 2026-03-23 00:56:09.823911 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-23 00:56:09.823919 | orchestrator | Monday 23 March 2026 00:51:17 +0000 (0:00:00.805) 0:05:32.713 ********** 2026-03-23 00:56:09.823923 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.823926 | orchestrator | 2026-03-23 00:56:09.823929 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-23 00:56:09.823934 | orchestrator | Monday 23 March 2026 00:51:18 +0000 (0:00:00.782) 0:05:33.496 ********** 2026-03-23 00:56:09.823937 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.823940 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.823943 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.823946 | orchestrator | 2026-03-23 00:56:09.823949 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-23 00:56:09.823952 | orchestrator | Monday 23 March 2026 00:51:19 +0000 (0:00:00.763) 0:05:34.260 ********** 2026-03-23 00:56:09.823955 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.823959 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.823962 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.823965 | orchestrator | 2026-03-23 00:56:09.823968 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-23 00:56:09.823971 | orchestrator | Monday 23 March 2026 00:51:19 +0000 (0:00:00.288) 0:05:34.548 ********** 2026-03-23 00:56:09.823974 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-23 00:56:09.823977 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-23 00:56:09.823980 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-23 00:56:09.823983 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-23 00:56:09.823986 | orchestrator | 2026-03-23 00:56:09.823989 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-23 00:56:09.823992 | orchestrator | Monday 23 March 2026 00:51:27 +0000 (0:00:07.804) 0:05:42.353 ********** 2026-03-23 00:56:09.823995 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.823998 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.824001 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.824004 | orchestrator | 2026-03-23 00:56:09.824007 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-23 00:56:09.824010 | orchestrator | Monday 23 March 2026 00:51:28 +0000 (0:00:00.460) 0:05:42.813 ********** 2026-03-23 00:56:09.824014 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-23 00:56:09.824017 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-23 00:56:09.824020 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-23 00:56:09.824023 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-23 00:56:09.824026 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:56:09.824041 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:56:09.824044 | orchestrator | 2026-03-23 00:56:09.824047 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-23 00:56:09.824051 | orchestrator | Monday 23 March 2026 00:51:29 +0000 (0:00:01.779) 0:05:44.593 ********** 2026-03-23 00:56:09.824054 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-23 00:56:09.824057 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-23 00:56:09.824060 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-23 
00:56:09.824063 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-23 00:56:09.824066 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-23 00:56:09.824069 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-23 00:56:09.824072 | orchestrator | 2026-03-23 00:56:09.824075 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-23 00:56:09.824078 | orchestrator | Monday 23 March 2026 00:51:31 +0000 (0:00:01.267) 0:05:45.860 ********** 2026-03-23 00:56:09.824094 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.824100 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.824103 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.824106 | orchestrator | 2026-03-23 00:56:09.824109 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-23 00:56:09.824113 | orchestrator | Monday 23 March 2026 00:51:31 +0000 (0:00:00.634) 0:05:46.495 ********** 2026-03-23 00:56:09.824116 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.824119 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.824122 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.824125 | orchestrator | 2026-03-23 00:56:09.824128 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-23 00:56:09.824131 | orchestrator | Monday 23 March 2026 00:51:32 +0000 (0:00:00.386) 0:05:46.881 ********** 2026-03-23 00:56:09.824134 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.824137 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.824140 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.824143 | orchestrator | 2026-03-23 00:56:09.824146 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-23 00:56:09.824150 | orchestrator | Monday 23 March 2026 00:51:32 +0000 (0:00:00.220) 0:05:47.101 
********** 2026-03-23 00:56:09.824153 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.824156 | orchestrator | 2026-03-23 00:56:09.824159 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-23 00:56:09.824162 | orchestrator | Monday 23 March 2026 00:51:32 +0000 (0:00:00.399) 0:05:47.501 ********** 2026-03-23 00:56:09.824165 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.824168 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.824171 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.824174 | orchestrator | 2026-03-23 00:56:09.824177 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-23 00:56:09.824180 | orchestrator | Monday 23 March 2026 00:51:32 +0000 (0:00:00.251) 0:05:47.752 ********** 2026-03-23 00:56:09.824184 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.824187 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.824190 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.824193 | orchestrator | 2026-03-23 00:56:09.824196 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-23 00:56:09.824199 | orchestrator | Monday 23 March 2026 00:51:33 +0000 (0:00:00.456) 0:05:48.208 ********** 2026-03-23 00:56:09.824202 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.824205 | orchestrator | 2026-03-23 00:56:09.824208 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-23 00:56:09.824213 | orchestrator | Monday 23 March 2026 00:51:33 +0000 (0:00:00.445) 0:05:48.654 ********** 2026-03-23 00:56:09.824216 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.824219 | orchestrator | changed: 
[testbed-node-1] 2026-03-23 00:56:09.824222 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.824225 | orchestrator | 2026-03-23 00:56:09.824229 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-23 00:56:09.824232 | orchestrator | Monday 23 March 2026 00:51:35 +0000 (0:00:01.229) 0:05:49.883 ********** 2026-03-23 00:56:09.824235 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.824238 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.824241 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.824244 | orchestrator | 2026-03-23 00:56:09.824247 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-23 00:56:09.824250 | orchestrator | Monday 23 March 2026 00:51:36 +0000 (0:00:01.490) 0:05:51.374 ********** 2026-03-23 00:56:09.824253 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.824256 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.824259 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.824262 | orchestrator | 2026-03-23 00:56:09.824265 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-23 00:56:09.824271 | orchestrator | Monday 23 March 2026 00:51:38 +0000 (0:00:01.899) 0:05:53.274 ********** 2026-03-23 00:56:09.824274 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.824277 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.824280 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.824283 | orchestrator | 2026-03-23 00:56:09.824286 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-23 00:56:09.824289 | orchestrator | Monday 23 March 2026 00:51:40 +0000 (0:00:01.981) 0:05:55.255 ********** 2026-03-23 00:56:09.824293 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.824296 | orchestrator | skipping: 
[testbed-node-1] 2026-03-23 00:56:09.824299 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-23 00:56:09.824302 | orchestrator | 2026-03-23 00:56:09.824305 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-23 00:56:09.824308 | orchestrator | Monday 23 March 2026 00:51:40 +0000 (0:00:00.421) 0:05:55.676 ********** 2026-03-23 00:56:09.824321 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-23 00:56:09.824325 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-23 00:56:09.824328 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-23 00:56:09.824331 | orchestrator | 2026-03-23 00:56:09.824334 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-23 00:56:09.824338 | orchestrator | Monday 23 March 2026 00:51:54 +0000 (0:00:13.337) 0:06:09.014 ********** 2026-03-23 00:56:09.824341 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-23 00:56:09.824344 | orchestrator | 2026-03-23 00:56:09.824347 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-23 00:56:09.824350 | orchestrator | Monday 23 March 2026 00:51:55 +0000 (0:00:01.241) 0:06:10.255 ********** 2026-03-23 00:56:09.824353 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.824356 | orchestrator | 2026-03-23 00:56:09.824360 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-23 00:56:09.824363 | orchestrator | Monday 23 March 2026 00:51:55 +0000 (0:00:00.273) 0:06:10.529 ********** 2026-03-23 00:56:09.824366 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.824369 | orchestrator | 2026-03-23 00:56:09.824372 | orchestrator | TASK 
[ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-23 00:56:09.824375 | orchestrator | Monday 23 March 2026 00:51:55 +0000 (0:00:00.135) 0:06:10.664 ********** 2026-03-23 00:56:09.824378 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-23 00:56:09.824381 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-23 00:56:09.824384 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-23 00:56:09.824388 | orchestrator | 2026-03-23 00:56:09.824393 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-03-23 00:56:09.824398 | orchestrator | Monday 23 March 2026 00:52:01 +0000 (0:00:05.931) 0:06:16.595 ********** 2026-03-23 00:56:09.824403 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-23 00:56:09.824408 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-23 00:56:09.824414 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-23 00:56:09.824420 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-23 00:56:09.824426 | orchestrator | 2026-03-23 00:56:09.824431 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-23 00:56:09.824435 | orchestrator | Monday 23 March 2026 00:52:06 +0000 (0:00:04.495) 0:06:21.091 ********** 2026-03-23 00:56:09.824438 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.824441 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.824447 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.824450 | orchestrator | 2026-03-23 00:56:09.824453 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-23 00:56:09.824456 | orchestrator | Monday 23 March 2026 
00:52:07 +0000 (0:00:00.816) 0:06:21.907 ********** 2026-03-23 00:56:09.824459 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.824462 | orchestrator | 2026-03-23 00:56:09.824465 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-23 00:56:09.824468 | orchestrator | Monday 23 March 2026 00:52:07 +0000 (0:00:00.435) 0:06:22.343 ********** 2026-03-23 00:56:09.824471 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.824474 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.824478 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.824481 | orchestrator | 2026-03-23 00:56:09.824486 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-23 00:56:09.824489 | orchestrator | Monday 23 March 2026 00:52:07 +0000 (0:00:00.254) 0:06:22.598 ********** 2026-03-23 00:56:09.824492 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.824495 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.824498 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.824501 | orchestrator | 2026-03-23 00:56:09.824504 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-23 00:56:09.824507 | orchestrator | Monday 23 March 2026 00:52:09 +0000 (0:00:01.383) 0:06:23.981 ********** 2026-03-23 00:56:09.824510 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-23 00:56:09.824513 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-23 00:56:09.824517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-23 00:56:09.824520 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.824523 | orchestrator | 2026-03-23 00:56:09.824526 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] 
********* 2026-03-23 00:56:09.824529 | orchestrator | Monday 23 March 2026 00:52:09 +0000 (0:00:00.555) 0:06:24.536 ********** 2026-03-23 00:56:09.824532 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.824535 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.824539 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.824542 | orchestrator | 2026-03-23 00:56:09.824545 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-23 00:56:09.824548 | orchestrator | 2026-03-23 00:56:09.824551 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-23 00:56:09.824554 | orchestrator | Monday 23 March 2026 00:52:10 +0000 (0:00:00.554) 0:06:25.091 ********** 2026-03-23 00:56:09.824557 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.824560 | orchestrator | 2026-03-23 00:56:09.824563 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-23 00:56:09.824566 | orchestrator | Monday 23 March 2026 00:52:10 +0000 (0:00:00.599) 0:06:25.690 ********** 2026-03-23 00:56:09.824581 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.824584 | orchestrator | 2026-03-23 00:56:09.824587 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-23 00:56:09.824590 | orchestrator | Monday 23 March 2026 00:52:11 +0000 (0:00:00.467) 0:06:26.157 ********** 2026-03-23 00:56:09.824594 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.824597 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.824600 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.824603 | orchestrator | 2026-03-23 00:56:09.824606 | orchestrator | TASK 
[ceph-handler : Check for an osd container] ******************************* 2026-03-23 00:56:09.824609 | orchestrator | Monday 23 March 2026 00:52:11 +0000 (0:00:00.248) 0:06:26.406 ********** 2026-03-23 00:56:09.824614 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824619 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.824624 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.824629 | orchestrator | 2026-03-23 00:56:09.824634 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-23 00:56:09.824638 | orchestrator | Monday 23 March 2026 00:52:12 +0000 (0:00:00.896) 0:06:27.302 ********** 2026-03-23 00:56:09.824642 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824647 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.824651 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.824656 | orchestrator | 2026-03-23 00:56:09.824660 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-23 00:56:09.824665 | orchestrator | Monday 23 March 2026 00:52:13 +0000 (0:00:00.814) 0:06:28.117 ********** 2026-03-23 00:56:09.824669 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824674 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.824678 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.824683 | orchestrator | 2026-03-23 00:56:09.824688 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-23 00:56:09.824693 | orchestrator | Monday 23 March 2026 00:52:14 +0000 (0:00:00.706) 0:06:28.824 ********** 2026-03-23 00:56:09.824698 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.824702 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.824707 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.824712 | orchestrator | 2026-03-23 00:56:09.824716 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-23 00:56:09.824721 | orchestrator | Monday 23 March 2026 00:52:14 +0000 (0:00:00.285) 0:06:29.109 ********** 2026-03-23 00:56:09.824726 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.824731 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.824736 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.824741 | orchestrator | 2026-03-23 00:56:09.824746 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-23 00:56:09.824752 | orchestrator | Monday 23 March 2026 00:52:14 +0000 (0:00:00.434) 0:06:29.544 ********** 2026-03-23 00:56:09.824757 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.824763 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.824768 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.824773 | orchestrator | 2026-03-23 00:56:09.824776 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-23 00:56:09.824779 | orchestrator | Monday 23 March 2026 00:52:15 +0000 (0:00:00.243) 0:06:29.787 ********** 2026-03-23 00:56:09.824782 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824785 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.824788 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.824791 | orchestrator | 2026-03-23 00:56:09.824794 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-23 00:56:09.824798 | orchestrator | Monday 23 March 2026 00:52:15 +0000 (0:00:00.700) 0:06:30.487 ********** 2026-03-23 00:56:09.824801 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824804 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.824807 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.824810 | orchestrator | 2026-03-23 00:56:09.824813 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-23 
00:56:09.824819 | orchestrator | Monday 23 March 2026 00:52:16 +0000 (0:00:00.811) 0:06:31.299 ********** 2026-03-23 00:56:09.824822 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.824825 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.824828 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.824831 | orchestrator | 2026-03-23 00:56:09.824834 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-23 00:56:09.824837 | orchestrator | Monday 23 March 2026 00:52:16 +0000 (0:00:00.438) 0:06:31.737 ********** 2026-03-23 00:56:09.824840 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.824843 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.824851 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.824854 | orchestrator | 2026-03-23 00:56:09.824857 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-23 00:56:09.824860 | orchestrator | Monday 23 March 2026 00:52:17 +0000 (0:00:00.267) 0:06:32.005 ********** 2026-03-23 00:56:09.824863 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824866 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.824870 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.824873 | orchestrator | 2026-03-23 00:56:09.824876 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-23 00:56:09.824879 | orchestrator | Monday 23 March 2026 00:52:17 +0000 (0:00:00.313) 0:06:32.318 ********** 2026-03-23 00:56:09.824882 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824885 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.824888 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.824891 | orchestrator | 2026-03-23 00:56:09.824894 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-23 00:56:09.824897 | orchestrator | Monday 
23 March 2026 00:52:17 +0000 (0:00:00.270) 0:06:32.588 ********** 2026-03-23 00:56:09.824900 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824903 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.824906 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.824909 | orchestrator | 2026-03-23 00:56:09.824913 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-23 00:56:09.824916 | orchestrator | Monday 23 March 2026 00:52:18 +0000 (0:00:00.497) 0:06:33.086 ********** 2026-03-23 00:56:09.824919 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.824922 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.824925 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.824928 | orchestrator | 2026-03-23 00:56:09.824934 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-23 00:56:09.824937 | orchestrator | Monday 23 March 2026 00:52:18 +0000 (0:00:00.278) 0:06:33.365 ********** 2026-03-23 00:56:09.824940 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.824943 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.824946 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.824950 | orchestrator | 2026-03-23 00:56:09.824953 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-23 00:56:09.824956 | orchestrator | Monday 23 March 2026 00:52:18 +0000 (0:00:00.262) 0:06:33.627 ********** 2026-03-23 00:56:09.824959 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.824962 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.824965 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.824968 | orchestrator | 2026-03-23 00:56:09.824971 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-23 00:56:09.824974 | orchestrator | Monday 23 March 2026 
00:52:19 +0000 (0:00:00.314) 0:06:33.942 ********** 2026-03-23 00:56:09.824977 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824980 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.824984 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.824987 | orchestrator | 2026-03-23 00:56:09.824990 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-23 00:56:09.824993 | orchestrator | Monday 23 March 2026 00:52:19 +0000 (0:00:00.479) 0:06:34.422 ********** 2026-03-23 00:56:09.824996 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.824999 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.825002 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.825005 | orchestrator | 2026-03-23 00:56:09.825008 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-23 00:56:09.825011 | orchestrator | Monday 23 March 2026 00:52:20 +0000 (0:00:00.487) 0:06:34.909 ********** 2026-03-23 00:56:09.825014 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.825018 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.825021 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.825024 | orchestrator | 2026-03-23 00:56:09.825027 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-23 00:56:09.825032 | orchestrator | Monday 23 March 2026 00:52:20 +0000 (0:00:00.302) 0:06:35.212 ********** 2026-03-23 00:56:09.825035 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-23 00:56:09.825038 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-23 00:56:09.825041 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-23 00:56:09.825044 | orchestrator | 2026-03-23 00:56:09.825047 | orchestrator | TASK [ceph-osd : Include_tasks 
system_tuning.yml] ****************************** 2026-03-23 00:56:09.825050 | orchestrator | Monday 23 March 2026 00:52:21 +0000 (0:00:00.734) 0:06:35.946 ********** 2026-03-23 00:56:09.825054 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.825057 | orchestrator | 2026-03-23 00:56:09.825060 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-23 00:56:09.825063 | orchestrator | Monday 23 March 2026 00:52:21 +0000 (0:00:00.605) 0:06:36.552 ********** 2026-03-23 00:56:09.825066 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825069 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825072 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825075 | orchestrator | 2026-03-23 00:56:09.825078 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-23 00:56:09.825092 | orchestrator | Monday 23 March 2026 00:52:22 +0000 (0:00:00.254) 0:06:36.806 ********** 2026-03-23 00:56:09.825095 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825098 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825103 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825106 | orchestrator | 2026-03-23 00:56:09.825109 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-23 00:56:09.825112 | orchestrator | Monday 23 March 2026 00:52:22 +0000 (0:00:00.231) 0:06:37.038 ********** 2026-03-23 00:56:09.825116 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.825119 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.825122 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.825125 | orchestrator | 2026-03-23 00:56:09.825128 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-23 00:56:09.825131 | 
orchestrator | Monday 23 March 2026 00:52:23 +0000 (0:00:00.760) 0:06:37.798 ********** 2026-03-23 00:56:09.825134 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.825137 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.825140 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.825143 | orchestrator | 2026-03-23 00:56:09.825146 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-23 00:56:09.825150 | orchestrator | Monday 23 March 2026 00:52:23 +0000 (0:00:00.321) 0:06:38.120 ********** 2026-03-23 00:56:09.825153 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-23 00:56:09.825156 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-23 00:56:09.825159 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-23 00:56:09.825162 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-23 00:56:09.825165 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-23 00:56:09.825168 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-23 00:56:09.825171 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-23 00:56:09.825174 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-23 00:56:09.825180 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-23 00:56:09.825184 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-23 00:56:09.825189 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-23 
00:56:09.825192 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-23 00:56:09.825195 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-23 00:56:09.825198 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-23 00:56:09.825201 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-23 00:56:09.825204 | orchestrator | 2026-03-23 00:56:09.825207 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-23 00:56:09.825210 | orchestrator | Monday 23 March 2026 00:52:27 +0000 (0:00:03.961) 0:06:42.081 ********** 2026-03-23 00:56:09.825214 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825217 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825220 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825223 | orchestrator | 2026-03-23 00:56:09.825226 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-23 00:56:09.825229 | orchestrator | Monday 23 March 2026 00:52:27 +0000 (0:00:00.242) 0:06:42.324 ********** 2026-03-23 00:56:09.825232 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.825235 | orchestrator | 2026-03-23 00:56:09.825238 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-23 00:56:09.825241 | orchestrator | Monday 23 March 2026 00:52:28 +0000 (0:00:00.593) 0:06:42.917 ********** 2026-03-23 00:56:09.825245 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-23 00:56:09.825248 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-23 00:56:09.825251 | orchestrator | ok: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-osd/) 2026-03-23 00:56:09.825254 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-23 00:56:09.825257 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-23 00:56:09.825260 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-23 00:56:09.825263 | orchestrator | 2026-03-23 00:56:09.825266 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-23 00:56:09.825269 | orchestrator | Monday 23 March 2026 00:52:29 +0000 (0:00:00.909) 0:06:43.827 ********** 2026-03-23 00:56:09.825272 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:56:09.825276 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-23 00:56:09.825279 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-23 00:56:09.825282 | orchestrator | 2026-03-23 00:56:09.825285 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-23 00:56:09.825288 | orchestrator | Monday 23 March 2026 00:52:30 +0000 (0:00:01.752) 0:06:45.579 ********** 2026-03-23 00:56:09.825291 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-23 00:56:09.825294 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-23 00:56:09.825297 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.825300 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-23 00:56:09.825303 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-23 00:56:09.825306 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.825309 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-23 00:56:09.825314 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-23 00:56:09.825318 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.825321 | orchestrator | 2026-03-23 00:56:09.825324 | 
orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-23 00:56:09.825327 | orchestrator | Monday 23 March 2026 00:52:32 +0000 (0:00:01.522) 0:06:47.102 ********** 2026-03-23 00:56:09.825332 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-23 00:56:09.825335 | orchestrator | 2026-03-23 00:56:09.825338 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-23 00:56:09.825341 | orchestrator | Monday 23 March 2026 00:52:34 +0000 (0:00:01.912) 0:06:49.015 ********** 2026-03-23 00:56:09.825345 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.825348 | orchestrator | 2026-03-23 00:56:09.825351 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-23 00:56:09.825354 | orchestrator | Monday 23 March 2026 00:52:34 +0000 (0:00:00.536) 0:06:49.551 ********** 2026-03-23 00:56:09.825357 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4e8fe5fb-1ce5-58e9-8668-0121db885e3a', 'data_vg': 'ceph-4e8fe5fb-1ce5-58e9-8668-0121db885e3a'}) 2026-03-23 00:56:09.825361 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b7e7e409-387b-5e35-af60-96efea6ce8aa', 'data_vg': 'ceph-b7e7e409-387b-5e35-af60-96efea6ce8aa'}) 2026-03-23 00:56:09.825364 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1bf36823-02d4-5086-a00f-5e3efdd328af', 'data_vg': 'ceph-1bf36823-02d4-5086-a00f-5e3efdd328af'}) 2026-03-23 00:56:09.825368 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-64892dc7-40b9-50f4-a971-7ffdf1a56e40', 'data_vg': 'ceph-64892dc7-40b9-50f4-a971-7ffdf1a56e40'}) 2026-03-23 00:56:09.825373 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6fa6fe99-be0d-55bf-a5b2-66c7db596be7', 'data_vg': 
'ceph-6fa6fe99-be0d-55bf-a5b2-66c7db596be7'}) 2026-03-23 00:56:09.825376 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-92a7bb1e-121d-56dc-8fa7-94c9c65422a6', 'data_vg': 'ceph-92a7bb1e-121d-56dc-8fa7-94c9c65422a6'}) 2026-03-23 00:56:09.825379 | orchestrator | 2026-03-23 00:56:09.825382 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-23 00:56:09.825386 | orchestrator | Monday 23 March 2026 00:53:06 +0000 (0:00:31.637) 0:07:21.188 ********** 2026-03-23 00:56:09.825389 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825392 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825395 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825398 | orchestrator | 2026-03-23 00:56:09.825401 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-23 00:56:09.825404 | orchestrator | Monday 23 March 2026 00:53:06 +0000 (0:00:00.547) 0:07:21.736 ********** 2026-03-23 00:56:09.825407 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.825410 | orchestrator | 2026-03-23 00:56:09.825413 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-23 00:56:09.825416 | orchestrator | Monday 23 March 2026 00:53:07 +0000 (0:00:00.513) 0:07:22.250 ********** 2026-03-23 00:56:09.825420 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.825423 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.825426 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.825429 | orchestrator | 2026-03-23 00:56:09.825432 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-23 00:56:09.825435 | orchestrator | Monday 23 March 2026 00:53:08 +0000 (0:00:00.635) 0:07:22.885 ********** 2026-03-23 00:56:09.825438 | orchestrator | ok: 
[testbed-node-3] 2026-03-23 00:56:09.825441 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.825444 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.825447 | orchestrator | 2026-03-23 00:56:09.825450 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-23 00:56:09.825454 | orchestrator | Monday 23 March 2026 00:53:09 +0000 (0:00:01.601) 0:07:24.486 ********** 2026-03-23 00:56:09.825457 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.825460 | orchestrator | 2026-03-23 00:56:09.825463 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-23 00:56:09.825468 | orchestrator | Monday 23 March 2026 00:53:10 +0000 (0:00:00.454) 0:07:24.941 ********** 2026-03-23 00:56:09.825471 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.825474 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.825477 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.825480 | orchestrator | 2026-03-23 00:56:09.825483 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-23 00:56:09.825487 | orchestrator | Monday 23 March 2026 00:53:11 +0000 (0:00:01.111) 0:07:26.052 ********** 2026-03-23 00:56:09.825490 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.825493 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.825496 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.825499 | orchestrator | 2026-03-23 00:56:09.825502 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-23 00:56:09.825505 | orchestrator | Monday 23 March 2026 00:53:12 +0000 (0:00:01.462) 0:07:27.515 ********** 2026-03-23 00:56:09.825508 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.825511 | orchestrator | changed: [testbed-node-5] 
2026-03-23 00:56:09.825514 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.825517 | orchestrator | 2026-03-23 00:56:09.825520 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-23 00:56:09.825524 | orchestrator | Monday 23 March 2026 00:53:14 +0000 (0:00:02.063) 0:07:29.578 ********** 2026-03-23 00:56:09.825529 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825532 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825535 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825538 | orchestrator | 2026-03-23 00:56:09.825541 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-23 00:56:09.825544 | orchestrator | Monday 23 March 2026 00:53:15 +0000 (0:00:00.323) 0:07:29.902 ********** 2026-03-23 00:56:09.825547 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825550 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825553 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825556 | orchestrator | 2026-03-23 00:56:09.825560 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-23 00:56:09.825563 | orchestrator | Monday 23 March 2026 00:53:15 +0000 (0:00:00.348) 0:07:30.250 ********** 2026-03-23 00:56:09.825566 | orchestrator | ok: [testbed-node-3] => (item=2) 2026-03-23 00:56:09.825569 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-23 00:56:09.825572 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-23 00:56:09.825575 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-23 00:56:09.825578 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-23 00:56:09.825581 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-23 00:56:09.825584 | orchestrator | 2026-03-23 00:56:09.825587 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-23 00:56:09.825590 | 
orchestrator | Monday 23 March 2026 00:53:16 +0000 (0:00:01.366) 0:07:31.617 ********** 2026-03-23 00:56:09.825594 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-03-23 00:56:09.825597 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-23 00:56:09.825600 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-03-23 00:56:09.825603 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-23 00:56:09.825606 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-23 00:56:09.825609 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-23 00:56:09.825612 | orchestrator | 2026-03-23 00:56:09.825615 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-23 00:56:09.825618 | orchestrator | Monday 23 March 2026 00:53:19 +0000 (0:00:02.340) 0:07:33.957 ********** 2026-03-23 00:56:09.825621 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-03-23 00:56:09.825624 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-03-23 00:56:09.825629 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-23 00:56:09.825633 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-23 00:56:09.825638 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-23 00:56:09.825641 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-23 00:56:09.825644 | orchestrator | 2026-03-23 00:56:09.825647 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-23 00:56:09.825650 | orchestrator | Monday 23 March 2026 00:53:23 +0000 (0:00:03.841) 0:07:37.798 ********** 2026-03-23 00:56:09.825653 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825656 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825659 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-23 00:56:09.825662 | orchestrator | 2026-03-23 00:56:09.825665 | orchestrator | TASK [ceph-osd : Wait 
for all osd to be up] ************************************ 2026-03-23 00:56:09.825668 | orchestrator | Monday 23 March 2026 00:53:25 +0000 (0:00:02.035) 0:07:39.834 ********** 2026-03-23 00:56:09.825672 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825675 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825678 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-23 00:56:09.825681 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-23 00:56:09.825684 | orchestrator | 2026-03-23 00:56:09.825687 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-23 00:56:09.825690 | orchestrator | Monday 23 March 2026 00:53:37 +0000 (0:00:12.814) 0:07:52.649 ********** 2026-03-23 00:56:09.825693 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825696 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825699 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825702 | orchestrator | 2026-03-23 00:56:09.825706 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-23 00:56:09.825709 | orchestrator | Monday 23 March 2026 00:53:38 +0000 (0:00:00.868) 0:07:53.517 ********** 2026-03-23 00:56:09.825712 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825715 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825718 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825721 | orchestrator | 2026-03-23 00:56:09.825724 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-23 00:56:09.825727 | orchestrator | Monday 23 March 2026 00:53:39 +0000 (0:00:00.559) 0:07:54.076 ********** 2026-03-23 00:56:09.825730 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 
2026-03-23 00:56:09.825733 | orchestrator | 2026-03-23 00:56:09.825736 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-23 00:56:09.825739 | orchestrator | Monday 23 March 2026 00:53:39 +0000 (0:00:00.526) 0:07:54.603 ********** 2026-03-23 00:56:09.825742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-23 00:56:09.825746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-23 00:56:09.825749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-23 00:56:09.825752 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825755 | orchestrator | 2026-03-23 00:56:09.825758 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-23 00:56:09.825761 | orchestrator | Monday 23 March 2026 00:53:40 +0000 (0:00:00.384) 0:07:54.988 ********** 2026-03-23 00:56:09.825764 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825767 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825770 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825773 | orchestrator | 2026-03-23 00:56:09.825776 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-23 00:56:09.825781 | orchestrator | Monday 23 March 2026 00:53:40 +0000 (0:00:00.311) 0:07:55.300 ********** 2026-03-23 00:56:09.825784 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825787 | orchestrator | 2026-03-23 00:56:09.825790 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-23 00:56:09.825796 | orchestrator | Monday 23 March 2026 00:53:41 +0000 (0:00:00.702) 0:07:56.002 ********** 2026-03-23 00:56:09.825799 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825802 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825805 | orchestrator | skipping: 
[testbed-node-5] 2026-03-23 00:56:09.825808 | orchestrator | 2026-03-23 00:56:09.825811 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-23 00:56:09.825814 | orchestrator | Monday 23 March 2026 00:53:41 +0000 (0:00:00.311) 0:07:56.314 ********** 2026-03-23 00:56:09.825817 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825820 | orchestrator | 2026-03-23 00:56:09.825823 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-23 00:56:09.825826 | orchestrator | Monday 23 March 2026 00:53:41 +0000 (0:00:00.216) 0:07:56.531 ********** 2026-03-23 00:56:09.825829 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825833 | orchestrator | 2026-03-23 00:56:09.825836 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-23 00:56:09.825839 | orchestrator | Monday 23 March 2026 00:53:42 +0000 (0:00:00.243) 0:07:56.774 ********** 2026-03-23 00:56:09.825842 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825845 | orchestrator | 2026-03-23 00:56:09.825848 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-23 00:56:09.825851 | orchestrator | Monday 23 March 2026 00:53:42 +0000 (0:00:00.127) 0:07:56.902 ********** 2026-03-23 00:56:09.825854 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825857 | orchestrator | 2026-03-23 00:56:09.825860 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-23 00:56:09.825863 | orchestrator | Monday 23 March 2026 00:53:42 +0000 (0:00:00.215) 0:07:57.117 ********** 2026-03-23 00:56:09.825866 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825870 | orchestrator | 2026-03-23 00:56:09.825873 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-23 00:56:09.825876 | 
orchestrator | Monday 23 March 2026 00:53:42 +0000 (0:00:00.220) 0:07:57.338 ********** 2026-03-23 00:56:09.825881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-23 00:56:09.825884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-23 00:56:09.825887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-23 00:56:09.825890 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825893 | orchestrator | 2026-03-23 00:56:09.825896 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-23 00:56:09.825899 | orchestrator | Monday 23 March 2026 00:53:42 +0000 (0:00:00.371) 0:07:57.709 ********** 2026-03-23 00:56:09.825902 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825906 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825909 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825912 | orchestrator | 2026-03-23 00:56:09.825915 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-23 00:56:09.825918 | orchestrator | Monday 23 March 2026 00:53:43 +0000 (0:00:00.571) 0:07:58.280 ********** 2026-03-23 00:56:09.825921 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825924 | orchestrator | 2026-03-23 00:56:09.825927 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-23 00:56:09.825930 | orchestrator | Monday 23 March 2026 00:53:43 +0000 (0:00:00.220) 0:07:58.500 ********** 2026-03-23 00:56:09.825933 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825936 | orchestrator | 2026-03-23 00:56:09.825939 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-23 00:56:09.825942 | orchestrator | 2026-03-23 00:56:09.825946 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-23 00:56:09.825949 | orchestrator | Monday 23 March 2026 00:53:44 +0000 (0:00:00.635) 0:07:59.135 ********** 2026-03-23 00:56:09.825952 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.825958 | orchestrator | 2026-03-23 00:56:09.825961 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-23 00:56:09.825964 | orchestrator | Monday 23 March 2026 00:53:45 +0000 (0:00:01.237) 0:08:00.373 ********** 2026-03-23 00:56:09.825968 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.825971 | orchestrator | 2026-03-23 00:56:09.825974 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-23 00:56:09.825977 | orchestrator | Monday 23 March 2026 00:53:46 +0000 (0:00:01.205) 0:08:01.579 ********** 2026-03-23 00:56:09.825980 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.825983 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.825986 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.825989 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.825992 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.825996 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.825999 | orchestrator | 2026-03-23 00:56:09.826002 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-23 00:56:09.826005 | orchestrator | Monday 23 March 2026 00:53:48 +0000 (0:00:01.498) 0:08:03.077 ********** 2026-03-23 00:56:09.826008 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826011 | orchestrator | skipping: [testbed-node-1] 2026-03-23 
00:56:09.826036 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826040 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826043 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826046 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826049 | orchestrator | 2026-03-23 00:56:09.826052 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-23 00:56:09.826055 | orchestrator | Monday 23 March 2026 00:53:49 +0000 (0:00:00.908) 0:08:03.985 ********** 2026-03-23 00:56:09.826060 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826063 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826066 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826069 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826072 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826075 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826078 | orchestrator | 2026-03-23 00:56:09.826103 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-23 00:56:09.826106 | orchestrator | Monday 23 March 2026 00:53:50 +0000 (0:00:01.114) 0:08:05.100 ********** 2026-03-23 00:56:09.826109 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826112 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826115 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826118 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826122 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826125 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826128 | orchestrator | 2026-03-23 00:56:09.826131 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-23 00:56:09.826134 | orchestrator | Monday 23 March 2026 00:53:51 +0000 (0:00:00.809) 0:08:05.909 ********** 2026-03-23 00:56:09.826137 | orchestrator | skipping: [testbed-node-3] 
2026-03-23 00:56:09.826140 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.826143 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.826146 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826149 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.826155 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.826160 | orchestrator | 2026-03-23 00:56:09.826165 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-23 00:56:09.826169 | orchestrator | Monday 23 March 2026 00:53:52 +0000 (0:00:01.200) 0:08:07.110 ********** 2026-03-23 00:56:09.826174 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.826178 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.826185 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.826189 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826194 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826198 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826203 | orchestrator | 2026-03-23 00:56:09.826208 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-23 00:56:09.826212 | orchestrator | Monday 23 March 2026 00:53:52 +0000 (0:00:00.571) 0:08:07.681 ********** 2026-03-23 00:56:09.826217 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.826225 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.826230 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.826235 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826241 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826246 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826251 | orchestrator | 2026-03-23 00:56:09.826256 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-23 00:56:09.826262 | orchestrator | Monday 23 March 2026 
00:53:53 +0000 (0:00:00.758) 0:08:08.440 ********** 2026-03-23 00:56:09.826268 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826273 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826277 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826281 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826284 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.826287 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.826290 | orchestrator | 2026-03-23 00:56:09.826293 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-23 00:56:09.826296 | orchestrator | Monday 23 March 2026 00:53:54 +0000 (0:00:00.901) 0:08:09.341 ********** 2026-03-23 00:56:09.826299 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826302 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826305 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826308 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826311 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.826314 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.826317 | orchestrator | 2026-03-23 00:56:09.826320 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-23 00:56:09.826324 | orchestrator | Monday 23 March 2026 00:53:55 +0000 (0:00:00.946) 0:08:10.288 ********** 2026-03-23 00:56:09.826327 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.826330 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.826333 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.826336 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826339 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826342 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826345 | orchestrator | 2026-03-23 00:56:09.826377 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-03-23 00:56:09.826381 | orchestrator | Monday 23 March 2026 00:53:56 +0000 (0:00:00.643) 0:08:10.931 ********** 2026-03-23 00:56:09.826384 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.826387 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.826390 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.826393 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826396 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.826399 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.826403 | orchestrator | 2026-03-23 00:56:09.826406 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-23 00:56:09.826409 | orchestrator | Monday 23 March 2026 00:53:56 +0000 (0:00:00.516) 0:08:11.448 ********** 2026-03-23 00:56:09.826412 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826415 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826418 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826421 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826425 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826428 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826431 | orchestrator | 2026-03-23 00:56:09.826437 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-23 00:56:09.826440 | orchestrator | Monday 23 March 2026 00:53:57 +0000 (0:00:00.634) 0:08:12.083 ********** 2026-03-23 00:56:09.826444 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826447 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826450 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826453 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826456 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826459 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826462 | orchestrator | 2026-03-23 00:56:09.826465 | orchestrator 
| TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-23 00:56:09.826472 | orchestrator | Monday 23 March 2026 00:53:57 +0000 (0:00:00.491) 0:08:12.574 ********** 2026-03-23 00:56:09.826476 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826479 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826482 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826485 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826488 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826491 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826494 | orchestrator | 2026-03-23 00:56:09.826498 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-23 00:56:09.826501 | orchestrator | Monday 23 March 2026 00:53:58 +0000 (0:00:00.681) 0:08:13.256 ********** 2026-03-23 00:56:09.826504 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.826507 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.826510 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.826513 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826516 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826519 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826522 | orchestrator | 2026-03-23 00:56:09.826525 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-23 00:56:09.826529 | orchestrator | Monday 23 March 2026 00:53:58 +0000 (0:00:00.512) 0:08:13.768 ********** 2026-03-23 00:56:09.826532 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.826535 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.826538 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.826541 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:56:09.826544 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:56:09.826547 | 
orchestrator | skipping: [testbed-node-2] 2026-03-23 00:56:09.826550 | orchestrator | 2026-03-23 00:56:09.826553 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-23 00:56:09.826557 | orchestrator | Monday 23 March 2026 00:53:59 +0000 (0:00:00.814) 0:08:14.583 ********** 2026-03-23 00:56:09.826560 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.826563 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.826566 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.826569 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826572 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.826575 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.826578 | orchestrator | 2026-03-23 00:56:09.826581 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-23 00:56:09.826587 | orchestrator | Monday 23 March 2026 00:54:00 +0000 (0:00:00.687) 0:08:15.271 ********** 2026-03-23 00:56:09.826591 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826594 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826597 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826600 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826603 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.826606 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.826609 | orchestrator | 2026-03-23 00:56:09.826612 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-23 00:56:09.826616 | orchestrator | Monday 23 March 2026 00:54:01 +0000 (0:00:00.847) 0:08:16.119 ********** 2026-03-23 00:56:09.826619 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826624 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826627 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826631 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826634 | orchestrator 
| ok: [testbed-node-1] 2026-03-23 00:56:09.826637 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.826640 | orchestrator | 2026-03-23 00:56:09.826643 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-23 00:56:09.826646 | orchestrator | Monday 23 March 2026 00:54:02 +0000 (0:00:01.184) 0:08:17.303 ********** 2026-03-23 00:56:09.826649 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-23 00:56:09.826652 | orchestrator | 2026-03-23 00:56:09.826656 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-23 00:56:09.826659 | orchestrator | Monday 23 March 2026 00:54:05 +0000 (0:00:03.373) 0:08:20.677 ********** 2026-03-23 00:56:09.826662 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-23 00:56:09.826665 | orchestrator | 2026-03-23 00:56:09.826668 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-23 00:56:09.826671 | orchestrator | Monday 23 March 2026 00:54:07 +0000 (0:00:01.574) 0:08:22.251 ********** 2026-03-23 00:56:09.826674 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.826677 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.826681 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.826684 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826687 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.826690 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.826693 | orchestrator | 2026-03-23 00:56:09.826696 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-23 00:56:09.826699 | orchestrator | Monday 23 March 2026 00:54:08 +0000 (0:00:01.475) 0:08:23.726 ********** 2026-03-23 00:56:09.826702 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.826705 | orchestrator | changed: [testbed-node-4] 2026-03-23 
00:56:09.826709 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.826712 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.826715 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.826718 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.826721 | orchestrator | 2026-03-23 00:56:09.826724 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-23 00:56:09.826727 | orchestrator | Monday 23 March 2026 00:54:10 +0000 (0:00:01.264) 0:08:24.991 ********** 2026-03-23 00:56:09.826732 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.826738 | orchestrator | 2026-03-23 00:56:09.826746 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-23 00:56:09.826752 | orchestrator | Monday 23 March 2026 00:54:11 +0000 (0:00:01.276) 0:08:26.267 ********** 2026-03-23 00:56:09.826756 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.826761 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.826766 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.826770 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.826775 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.826780 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.826785 | orchestrator | 2026-03-23 00:56:09.826792 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-23 00:56:09.826797 | orchestrator | Monday 23 March 2026 00:54:13 +0000 (0:00:01.538) 0:08:27.806 ********** 2026-03-23 00:56:09.826803 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.826808 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.826813 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.826818 | orchestrator | 
changed: [testbed-node-0] 2026-03-23 00:56:09.826823 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.826829 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.826832 | orchestrator | 2026-03-23 00:56:09.826835 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-23 00:56:09.826842 | orchestrator | Monday 23 March 2026 00:54:16 +0000 (0:00:03.765) 0:08:31.571 ********** 2026-03-23 00:56:09.826845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:56:09.826848 | orchestrator | 2026-03-23 00:56:09.826851 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-23 00:56:09.826854 | orchestrator | Monday 23 March 2026 00:54:18 +0000 (0:00:01.228) 0:08:32.800 ********** 2026-03-23 00:56:09.826858 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826861 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826864 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826867 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826870 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.826873 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.826876 | orchestrator | 2026-03-23 00:56:09.826879 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-23 00:56:09.826882 | orchestrator | Monday 23 March 2026 00:54:18 +0000 (0:00:00.660) 0:08:33.461 ********** 2026-03-23 00:56:09.826885 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.826888 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.826891 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.826894 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:56:09.826897 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:56:09.826900 | 
orchestrator | changed: [testbed-node-2] 2026-03-23 00:56:09.826904 | orchestrator | 2026-03-23 00:56:09.826907 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-23 00:56:09.826913 | orchestrator | Monday 23 March 2026 00:54:21 +0000 (0:00:02.892) 0:08:36.353 ********** 2026-03-23 00:56:09.826916 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.826919 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.826922 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.826925 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:56:09.826928 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:56:09.826931 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:56:09.826934 | orchestrator | 2026-03-23 00:56:09.826937 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-23 00:56:09.826941 | orchestrator | 2026-03-23 00:56:09.826944 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-23 00:56:09.826947 | orchestrator | Monday 23 March 2026 00:54:22 +0000 (0:00:01.219) 0:08:37.573 ********** 2026-03-23 00:56:09.826950 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-03-23 00:56:09.826953 | orchestrator | 2026-03-23 00:56:09.826956 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-23 00:56:09.826960 | orchestrator | Monday 23 March 2026 00:54:23 +0000 (0:00:00.506) 0:08:38.079 ********** 2026-03-23 00:56:09.826965 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.826969 | orchestrator | 2026-03-23 00:56:09.826974 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-23 00:56:09.826979 | 
orchestrator | Monday 23 March 2026 00:54:23 +0000 (0:00:00.497) 0:08:38.577 ********** 2026-03-23 00:56:09.826984 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.826989 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.826995 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.826998 | orchestrator | 2026-03-23 00:56:09.827001 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-23 00:56:09.827004 | orchestrator | Monday 23 March 2026 00:54:24 +0000 (0:00:00.545) 0:08:39.122 ********** 2026-03-23 00:56:09.827008 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.827011 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.827014 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.827021 | orchestrator | 2026-03-23 00:56:09.827024 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-23 00:56:09.827028 | orchestrator | Monday 23 March 2026 00:54:25 +0000 (0:00:00.752) 0:08:39.874 ********** 2026-03-23 00:56:09.827031 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.827034 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.827037 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.827040 | orchestrator | 2026-03-23 00:56:09.827043 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-23 00:56:09.827046 | orchestrator | Monday 23 March 2026 00:54:25 +0000 (0:00:00.832) 0:08:40.707 ********** 2026-03-23 00:56:09.827049 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.827052 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.827055 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.827058 | orchestrator | 2026-03-23 00:56:09.827061 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-23 00:56:09.827064 | orchestrator | Monday 23 March 2026 00:54:26 +0000 
(0:00:00.712) 0:08:41.419 ********** 2026-03-23 00:56:09.827067 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.827070 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.827073 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.827077 | orchestrator | 2026-03-23 00:56:09.827092 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-23 00:56:09.827097 | orchestrator | Monday 23 March 2026 00:54:27 +0000 (0:00:00.613) 0:08:42.033 ********** 2026-03-23 00:56:09.827100 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.827103 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.827106 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.827109 | orchestrator | 2026-03-23 00:56:09.827115 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-23 00:56:09.827120 | orchestrator | Monday 23 March 2026 00:54:27 +0000 (0:00:00.303) 0:08:42.336 ********** 2026-03-23 00:56:09.827128 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.827134 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.827138 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.827143 | orchestrator | 2026-03-23 00:56:09.827148 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-23 00:56:09.827153 | orchestrator | Monday 23 March 2026 00:54:27 +0000 (0:00:00.284) 0:08:42.621 ********** 2026-03-23 00:56:09.827158 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.827162 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.827167 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.827171 | orchestrator | 2026-03-23 00:56:09.827176 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-23 00:56:09.827181 | orchestrator | Monday 23 March 2026 00:54:28 +0000 (0:00:00.768) 
0:08:43.389 **********
2026-03-23 00:56:09.827186 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827191 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827196 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827202 | orchestrator |
2026-03-23 00:56:09.827206 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-23 00:56:09.827212 | orchestrator | Monday 23 March 2026 00:54:29 +0000 (0:00:01.133) 0:08:44.523 **********
2026-03-23 00:56:09.827216 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827220 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827223 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.827226 | orchestrator |
2026-03-23 00:56:09.827229 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-23 00:56:09.827232 | orchestrator | Monday 23 March 2026 00:54:30 +0000 (0:00:00.301) 0:08:44.824 **********
2026-03-23 00:56:09.827235 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827238 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827241 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.827244 | orchestrator |
2026-03-23 00:56:09.827247 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-23 00:56:09.827254 | orchestrator | Monday 23 March 2026 00:54:30 +0000 (0:00:00.285) 0:08:45.109 **********
2026-03-23 00:56:09.827258 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827261 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827267 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827270 | orchestrator |
2026-03-23 00:56:09.827273 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-23 00:56:09.827276 | orchestrator | Monday 23 March 2026 00:54:30 +0000 (0:00:00.264) 0:08:45.374 ********** 2026-03-23
00:56:09.827279 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827282 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827285 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827288 | orchestrator |
2026-03-23 00:56:09.827292 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-23 00:56:09.827295 | orchestrator | Monday 23 March 2026 00:54:31 +0000 (0:00:00.444) 0:08:45.819 **********
2026-03-23 00:56:09.827298 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827301 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827304 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827307 | orchestrator |
2026-03-23 00:56:09.827310 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-23 00:56:09.827313 | orchestrator | Monday 23 March 2026 00:54:31 +0000 (0:00:00.303) 0:08:46.122 **********
2026-03-23 00:56:09.827316 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827319 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827322 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.827325 | orchestrator |
2026-03-23 00:56:09.827328 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-23 00:56:09.827331 | orchestrator | Monday 23 March 2026 00:54:31 +0000 (0:00:00.249) 0:08:46.372 **********
2026-03-23 00:56:09.827335 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827338 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827341 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.827344 | orchestrator |
2026-03-23 00:56:09.827347 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-23 00:56:09.827350 | orchestrator | Monday 23 March 2026 00:54:31 +0000 (0:00:00.258) 0:08:46.631 **********
2026-03-23 00:56:09.827353 | orchestrator |
skipping: [testbed-node-3]
2026-03-23 00:56:09.827356 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827359 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.827362 | orchestrator |
2026-03-23 00:56:09.827365 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-23 00:56:09.827368 | orchestrator | Monday 23 March 2026 00:54:32 +0000 (0:00:00.413) 0:08:47.044 **********
2026-03-23 00:56:09.827372 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827375 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827378 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827381 | orchestrator |
2026-03-23 00:56:09.827384 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-23 00:56:09.827387 | orchestrator | Monday 23 March 2026 00:54:32 +0000 (0:00:00.333) 0:08:47.377 **********
2026-03-23 00:56:09.827390 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827393 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827396 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827399 | orchestrator |
2026-03-23 00:56:09.827402 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-23 00:56:09.827405 | orchestrator | Monday 23 March 2026 00:54:33 +0000 (0:00:00.448) 0:08:47.825 **********
2026-03-23 00:56:09.827408 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827411 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.827415 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-23 00:56:09.827418 | orchestrator |
2026-03-23 00:56:09.827421 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-23 00:56:09.827424 | orchestrator | Monday 23 March 2026 00:54:33 +0000 (0:00:00.532) 0:08:48.358 ********** 2026-03-23
00:56:09.827431 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-23 00:56:09.827434 | orchestrator |
2026-03-23 00:56:09.827437 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-23 00:56:09.827442 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:01.745) 0:08:50.103 **********
2026-03-23 00:56:09.827447 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-23 00:56:09.827451 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827454 | orchestrator |
2026-03-23 00:56:09.827457 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-23 00:56:09.827460 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:00.169) 0:08:50.273 **********
2026-03-23 00:56:09.827464 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-23 00:56:09.827471 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-23 00:56:09.827475 | orchestrator |
2026-03-23 00:56:09.827478 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-23 00:56:09.827481 | orchestrator | Monday 23 March 2026 00:54:42 +0000 (0:00:06.681) 0:08:56.954 **********
2026-03-23 00:56:09.827484 | orchestrator | changed: [testbed-node-3 ->
testbed-node-0(192.168.16.10)]
2026-03-23 00:56:09.827487 | orchestrator |
2026-03-23 00:56:09.827490 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-23 00:56:09.827493 | orchestrator | Monday 23 March 2026 00:54:45 +0000 (0:00:03.167) 0:09:00.122 **********
2026-03-23 00:56:09.827498 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.827502 | orchestrator |
2026-03-23 00:56:09.827505 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-23 00:56:09.827508 | orchestrator | Monday 23 March 2026 00:54:46 +0000 (0:00:00.649) 0:09:00.772 **********
2026-03-23 00:56:09.827511 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-23 00:56:09.827514 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-23 00:56:09.827517 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-23 00:56:09.827520 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-23 00:56:09.827523 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-23 00:56:09.827526 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-23 00:56:09.827529 | orchestrator |
2026-03-23 00:56:09.827532 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-23 00:56:09.827535 | orchestrator | Monday 23 March 2026 00:54:46 +0000 (0:00:00.868) 0:09:01.640 **********
2026-03-23 00:56:09.827538 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-23 00:56:09.827542 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-23 00:56:09.827545 | orchestrator | ok: [testbed-node-3 -> {{
groups.get(mon_group_name)[0] }}]
2026-03-23 00:56:09.827548 | orchestrator |
2026-03-23 00:56:09.827551 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-23 00:56:09.827554 | orchestrator | Monday 23 March 2026 00:54:48 +0000 (0:00:01.518) 0:09:03.159 **********
2026-03-23 00:56:09.827557 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-23 00:56:09.827563 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-23 00:56:09.827566 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.827569 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-23 00:56:09.827572 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-23 00:56:09.827575 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.827578 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-23 00:56:09.827581 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-23 00:56:09.827584 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.827587 | orchestrator |
2026-03-23 00:56:09.827590 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-23 00:56:09.827593 | orchestrator | Monday 23 March 2026 00:54:49 +0000 (0:00:02.046) 0:09:04.314 **********
2026-03-23 00:56:09.827596 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.827599 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.827603 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.827606 | orchestrator |
2026-03-23 00:56:09.827609 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-23 00:56:09.827612 | orchestrator | Monday 23 March 2026 00:54:51 +0000 (0:00:00.359) 0:09:06.361 **********
2026-03-23 00:56:09.827615 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827618 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827621 |
orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.827624 | orchestrator |
2026-03-23 00:56:09.827627 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-23 00:56:09.827630 | orchestrator | Monday 23 March 2026 00:54:51 +0000 (0:00:00.359) 0:09:06.721 **********
2026-03-23 00:56:09.827633 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.827636 | orchestrator |
2026-03-23 00:56:09.827639 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-23 00:56:09.827645 | orchestrator | Monday 23 March 2026 00:54:52 +0000 (0:00:00.614) 0:09:07.335 **********
2026-03-23 00:56:09.827648 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.827651 | orchestrator |
2026-03-23 00:56:09.827654 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-23 00:56:09.827657 | orchestrator | Monday 23 March 2026 00:54:53 +0000 (0:00:01.030) 0:09:08.366 **********
2026-03-23 00:56:09.827660 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.827663 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.827666 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.827669 | orchestrator |
2026-03-23 00:56:09.827672 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-23 00:56:09.827675 | orchestrator | Monday 23 March 2026 00:54:54 +0000 (0:00:01.307) 0:09:09.673 **********
2026-03-23 00:56:09.827678 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.827682 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.827685 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.827688 | orchestrator |
2026-03-23 00:56:09.827691 | orchestrator | TASK
[ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-23 00:56:09.827694 | orchestrator | Monday 23 March 2026 00:54:55 +0000 (0:00:01.087) 0:09:10.761 **********
2026-03-23 00:56:09.827697 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.827700 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.827703 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.827706 | orchestrator |
2026-03-23 00:56:09.827709 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-23 00:56:09.827712 | orchestrator | Monday 23 March 2026 00:54:57 +0000 (0:00:01.947) 0:09:12.709 **********
2026-03-23 00:56:09.827715 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.827719 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.827722 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.827727 | orchestrator |
2026-03-23 00:56:09.827730 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-23 00:56:09.827733 | orchestrator | Monday 23 March 2026 00:54:59 +0000 (0:00:01.875) 0:09:14.585 **********
2026-03-23 00:56:09.827736 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827739 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827743 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827746 | orchestrator |
2026-03-23 00:56:09.827750 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-23 00:56:09.827753 | orchestrator | Monday 23 March 2026 00:55:01 +0000 (0:00:01.766) 0:09:16.351 **********
2026-03-23 00:56:09.827757 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.827760 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.827763 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.827766 | orchestrator |
2026-03-23 00:56:09.827769 | orchestrator | RUNNING HANDLER [ceph-handler :
Mdss handler] **********************************
2026-03-23 00:56:09.827772 | orchestrator | Monday 23 March 2026 00:55:02 +0000 (0:00:00.725) 0:09:17.077 **********
2026-03-23 00:56:09.827775 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.827778 | orchestrator |
2026-03-23 00:56:09.827781 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-23 00:56:09.827784 | orchestrator | Monday 23 March 2026 00:55:02 +0000 (0:00:00.548) 0:09:17.625 **********
2026-03-23 00:56:09.827787 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827790 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827793 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827796 | orchestrator |
2026-03-23 00:56:09.827800 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-23 00:56:09.827803 | orchestrator | Monday 23 March 2026 00:55:03 +0000 (0:00:00.559) 0:09:18.185 **********
2026-03-23 00:56:09.827806 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.827809 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.827812 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.827815 | orchestrator |
2026-03-23 00:56:09.827818 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-23 00:56:09.827821 | orchestrator | Monday 23 March 2026 00:55:04 +0000 (0:00:01.356) 0:09:19.541 **********
2026-03-23 00:56:09.827824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-23 00:56:09.827827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-23 00:56:09.827830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-23 00:56:09.827834 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827837 | orchestrator |
2026-03-23 00:56:09.827840 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-23 00:56:09.827843 | orchestrator | Monday 23 March 2026 00:55:05 +0000 (0:00:00.656) 0:09:20.197 **********
2026-03-23 00:56:09.827846 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827849 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827852 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827855 | orchestrator |
2026-03-23 00:56:09.827858 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-23 00:56:09.827861 | orchestrator |
2026-03-23 00:56:09.827864 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-23 00:56:09.827867 | orchestrator | Monday 23 March 2026 00:55:06 +0000 (0:00:00.643) 0:09:20.841 **********
2026-03-23 00:56:09.827871 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.827874 | orchestrator |
2026-03-23 00:56:09.827877 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-23 00:56:09.827880 | orchestrator | Monday 23 March 2026 00:55:06 +0000 (0:00:00.800) 0:09:21.642 **********
2026-03-23 00:56:09.827883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.827888 | orchestrator |
2026-03-23 00:56:09.827891 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-23 00:56:09.827894 | orchestrator | Monday 23 March 2026 00:55:07 +0000 (0:00:00.537) 0:09:22.179 **********
2026-03-23 00:56:09.827899 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827903 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827906 | orchestrator | skipping:
[testbed-node-5]
2026-03-23 00:56:09.827909 | orchestrator |
2026-03-23 00:56:09.827912 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-23 00:56:09.827915 | orchestrator | Monday 23 March 2026 00:55:07 +0000 (0:00:00.579) 0:09:22.759 **********
2026-03-23 00:56:09.827918 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827921 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827924 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827927 | orchestrator |
2026-03-23 00:56:09.827930 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-23 00:56:09.827933 | orchestrator | Monday 23 March 2026 00:55:08 +0000 (0:00:00.705) 0:09:23.464 **********
2026-03-23 00:56:09.827936 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827940 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827943 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827946 | orchestrator |
2026-03-23 00:56:09.827949 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-23 00:56:09.827952 | orchestrator | Monday 23 March 2026 00:55:09 +0000 (0:00:00.698) 0:09:24.162 **********
2026-03-23 00:56:09.827955 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.827958 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.827961 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.827964 | orchestrator |
2026-03-23 00:56:09.827967 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-23 00:56:09.827970 | orchestrator | Monday 23 March 2026 00:55:10 +0000 (0:00:00.694) 0:09:24.857 **********
2026-03-23 00:56:09.827973 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827976 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827980 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.827983 |
orchestrator |
2026-03-23 00:56:09.827986 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-23 00:56:09.827989 | orchestrator | Monday 23 March 2026 00:55:10 +0000 (0:00:00.594) 0:09:25.451 **********
2026-03-23 00:56:09.827992 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.827995 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.827998 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.828001 | orchestrator |
2026-03-23 00:56:09.828004 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-23 00:56:09.828009 | orchestrator | Monday 23 March 2026 00:55:11 +0000 (0:00:00.326) 0:09:25.778 **********
2026-03-23 00:56:09.828012 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.828015 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.828018 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.828021 | orchestrator |
2026-03-23 00:56:09.828025 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-23 00:56:09.828028 | orchestrator | Monday 23 March 2026 00:55:11 +0000 (0:00:00.316) 0:09:26.095 **********
2026-03-23 00:56:09.828031 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.828034 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.828037 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.828040 | orchestrator |
2026-03-23 00:56:09.828043 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-23 00:56:09.828046 | orchestrator | Monday 23 March 2026 00:55:12 +0000 (0:00:00.687) 0:09:26.782 **********
2026-03-23 00:56:09.828049 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.828052 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.828055 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.828058 | orchestrator | 2026-03-23
00:56:09.828063 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-23 00:56:09.828067 | orchestrator | Monday 23 March 2026 00:55:12 +0000 (0:00:00.955) 0:09:27.737 **********
2026-03-23 00:56:09.828070 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.828073 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.828076 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.828088 | orchestrator |
2026-03-23 00:56:09.828092 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-23 00:56:09.828095 | orchestrator | Monday 23 March 2026 00:55:13 +0000 (0:00:00.347) 0:09:28.085 **********
2026-03-23 00:56:09.828098 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.828101 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.828104 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.828108 | orchestrator |
2026-03-23 00:56:09.828111 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-23 00:56:09.828114 | orchestrator | Monday 23 March 2026 00:55:13 +0000 (0:00:00.290) 0:09:28.376 **********
2026-03-23 00:56:09.828117 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.828120 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.828123 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.828126 | orchestrator |
2026-03-23 00:56:09.828129 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-23 00:56:09.828132 | orchestrator | Monday 23 March 2026 00:55:13 +0000 (0:00:00.311) 0:09:28.687 **********
2026-03-23 00:56:09.828135 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.828138 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.828141 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.828144 | orchestrator |
2026-03-23 00:56:09.828147 | orchestrator | TASK
[ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-23 00:56:09.828150 | orchestrator | Monday 23 March 2026 00:55:14 +0000 (0:00:00.539) 0:09:29.227 **********
2026-03-23 00:56:09.828153 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.828194 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.828198 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.828201 | orchestrator |
2026-03-23 00:56:09.828204 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-23 00:56:09.828207 | orchestrator | Monday 23 March 2026 00:55:14 +0000 (0:00:00.303) 0:09:29.531 **********
2026-03-23 00:56:09.828210 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.828213 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.828217 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.828220 | orchestrator |
2026-03-23 00:56:09.828224 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-23 00:56:09.828230 | orchestrator | Monday 23 March 2026 00:55:15 +0000 (0:00:00.314) 0:09:29.845 **********
2026-03-23 00:56:09.828235 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.828258 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.828268 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.828273 | orchestrator |
2026-03-23 00:56:09.828278 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-23 00:56:09.828283 | orchestrator | Monday 23 March 2026 00:55:15 +0000 (0:00:00.305) 0:09:30.151 **********
2026-03-23 00:56:09.828288 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.828293 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.828298 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.828303 | orchestrator |
2026-03-23 00:56:09.828308 | orchestrator | TASK [ceph-handler :
Set_fact handler_crash_status] ****************************
2026-03-23 00:56:09.828313 | orchestrator | Monday 23 March 2026 00:55:15 +0000 (0:00:00.517) 0:09:30.668 **********
2026-03-23 00:56:09.828319 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.828324 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.828329 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.828334 | orchestrator |
2026-03-23 00:56:09.828338 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-23 00:56:09.828348 | orchestrator | Monday 23 March 2026 00:55:16 +0000 (0:00:00.329) 0:09:30.997 **********
2026-03-23 00:56:09.828353 | orchestrator | ok: [testbed-node-3]
2026-03-23 00:56:09.828358 | orchestrator | ok: [testbed-node-4]
2026-03-23 00:56:09.828363 | orchestrator | ok: [testbed-node-5]
2026-03-23 00:56:09.828368 | orchestrator |
2026-03-23 00:56:09.828373 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-23 00:56:09.828378 | orchestrator | Monday 23 March 2026 00:55:16 +0000 (0:00:00.506) 0:09:31.504 **********
2026-03-23 00:56:09.828383 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.828388 | orchestrator |
2026-03-23 00:56:09.828394 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-23 00:56:09.828399 | orchestrator | Monday 23 March 2026 00:55:17 +0000 (0:00:00.759) 0:09:32.263 **********
2026-03-23 00:56:09.828404 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-23 00:56:09.828409 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-23 00:56:09.828412 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-23 00:56:09.828415 | orchestrator |
2026-03-23 00:56:09.828422 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if
needed] ***********************************
2026-03-23 00:56:09.828425 | orchestrator | Monday 23 March 2026 00:55:19 +0000 (0:00:01.761) 0:09:34.025 **********
2026-03-23 00:56:09.828428 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-23 00:56:09.828431 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-23 00:56:09.828434 | orchestrator | changed: [testbed-node-3]
2026-03-23 00:56:09.828438 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-23 00:56:09.828441 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-23 00:56:09.828444 | orchestrator | changed: [testbed-node-4]
2026-03-23 00:56:09.828447 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-23 00:56:09.828450 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-23 00:56:09.828453 | orchestrator | changed: [testbed-node-5]
2026-03-23 00:56:09.828456 | orchestrator |
2026-03-23 00:56:09.828459 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-23 00:56:09.828462 | orchestrator | Monday 23 March 2026 00:55:20 +0000 (0:00:01.372) 0:09:35.398 **********
2026-03-23 00:56:09.828465 | orchestrator | skipping: [testbed-node-3]
2026-03-23 00:56:09.828468 | orchestrator | skipping: [testbed-node-4]
2026-03-23 00:56:09.828471 | orchestrator | skipping: [testbed-node-5]
2026-03-23 00:56:09.828474 | orchestrator |
2026-03-23 00:56:09.828477 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-23 00:56:09.828480 | orchestrator | Monday 23 March 2026 00:55:20 +0000 (0:00:00.338) 0:09:35.736 **********
2026-03-23 00:56:09.828484 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-23 00:56:09.828487 | orchestrator |
2026-03-23 00:56:09.828490 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-23 00:56:09.828493 | orchestrator | Monday 23 March 2026 00:55:21 +0000 (0:00:00.742) 0:09:36.479 **********
2026-03-23 00:56:09.828496 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.828500 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.828503 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-23 00:56:09.828506 | orchestrator |
2026-03-23 00:56:09.828509 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-23 00:56:09.828512 | orchestrator | Monday 23 March 2026 00:55:22 +0000 (0:00:00.935) 0:09:37.414 **********
2026-03-23 00:56:09.828518 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-23 00:56:09.828521 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-23 00:56:09.828524 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-23 00:56:09.828527 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-23 00:56:09.828530 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-23 00:56:09.828553 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-23 00:56:09.828557 | orchestrator |
2026-03-23 00:56:09.828560 | orchestrator | TASK [ceph-rgw : Get keys
from monitors] *************************************** 2026-03-23 00:56:09.828563 | orchestrator | Monday 23 March 2026 00:55:25 +0000 (0:00:03.299) 0:09:40.713 ********** 2026-03-23 00:56:09.828566 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:56:09.828569 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-23 00:56:09.828572 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:56:09.828576 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-23 00:56:09.828579 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:56:09.828582 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-23 00:56:09.828585 | orchestrator | 2026-03-23 00:56:09.828588 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-23 00:56:09.828591 | orchestrator | Monday 23 March 2026 00:55:28 +0000 (0:00:02.182) 0:09:42.896 ********** 2026-03-23 00:56:09.828594 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-23 00:56:09.828597 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.828600 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-23 00:56:09.828603 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.828607 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-23 00:56:09.828610 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.828613 | orchestrator | 2026-03-23 00:56:09.828616 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-23 00:56:09.828619 | orchestrator | Monday 23 March 2026 00:55:29 +0000 (0:00:01.337) 0:09:44.233 ********** 2026-03-23 00:56:09.828622 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-23 
00:56:09.828625 | orchestrator | 2026-03-23 00:56:09.828628 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-23 00:56:09.828631 | orchestrator | Monday 23 March 2026 00:55:29 +0000 (0:00:00.222) 0:09:44.455 ********** 2026-03-23 00:56:09.828637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-23 00:56:09.828641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-23 00:56:09.828644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-23 00:56:09.828647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-23 00:56:09.828662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-23 00:56:09.828665 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.828668 | orchestrator | 2026-03-23 00:56:09.828671 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-23 00:56:09.828677 | orchestrator | Monday 23 March 2026 00:55:30 +0000 (0:00:00.608) 0:09:45.064 ********** 2026-03-23 00:56:09.828680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-23 00:56:09.828683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-23 00:56:09.828686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-23 00:56:09.828690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-23 00:56:09.828693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-23 00:56:09.828696 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.828699 | orchestrator | 2026-03-23 00:56:09.828702 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-23 00:56:09.828705 | orchestrator | Monday 23 March 2026 00:55:31 +0000 (0:00:00.708) 0:09:45.772 ********** 2026-03-23 00:56:09.828708 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-23 00:56:09.828711 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-23 00:56:09.828715 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-23 00:56:09.828718 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-23 00:56:09.828723 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-23 00:56:09.828726 | orchestrator | 2026-03-23 00:56:09.828729 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-23 00:56:09.828733 | orchestrator | Monday 23 March 2026 00:55:54 +0000 (0:00:23.259) 0:10:09.032 
********** 2026-03-23 00:56:09.828736 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.828739 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.828742 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.828745 | orchestrator | 2026-03-23 00:56:09.828748 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-23 00:56:09.828751 | orchestrator | Monday 23 March 2026 00:55:54 +0000 (0:00:00.581) 0:10:09.614 ********** 2026-03-23 00:56:09.828754 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.828757 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.828760 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.828763 | orchestrator | 2026-03-23 00:56:09.828767 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-23 00:56:09.828770 | orchestrator | Monday 23 March 2026 00:55:55 +0000 (0:00:00.330) 0:10:09.945 ********** 2026-03-23 00:56:09.828773 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.828776 | orchestrator | 2026-03-23 00:56:09.828779 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-23 00:56:09.828782 | orchestrator | Monday 23 March 2026 00:55:55 +0000 (0:00:00.598) 0:10:10.543 ********** 2026-03-23 00:56:09.828785 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.828788 | orchestrator | 2026-03-23 00:56:09.828792 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-23 00:56:09.828832 | orchestrator | Monday 23 March 2026 00:55:56 +0000 (0:00:00.946) 0:10:11.490 ********** 2026-03-23 00:56:09.828840 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.828845 | orchestrator | 
changed: [testbed-node-4] 2026-03-23 00:56:09.828850 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.828856 | orchestrator | 2026-03-23 00:56:09.828861 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-23 00:56:09.828870 | orchestrator | Monday 23 March 2026 00:55:57 +0000 (0:00:01.254) 0:10:12.744 ********** 2026-03-23 00:56:09.828875 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.828880 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.828885 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.828891 | orchestrator | 2026-03-23 00:56:09.828896 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-23 00:56:09.828901 | orchestrator | Monday 23 March 2026 00:55:59 +0000 (0:00:01.258) 0:10:14.003 ********** 2026-03-23 00:56:09.828906 | orchestrator | changed: [testbed-node-4] 2026-03-23 00:56:09.828911 | orchestrator | changed: [testbed-node-3] 2026-03-23 00:56:09.828916 | orchestrator | changed: [testbed-node-5] 2026-03-23 00:56:09.828921 | orchestrator | 2026-03-23 00:56:09.828926 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-23 00:56:09.828931 | orchestrator | Monday 23 March 2026 00:56:00 +0000 (0:00:01.763) 0:10:15.767 ********** 2026-03-23 00:56:09.828937 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-23 00:56:09.828942 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-23 00:56:09.828947 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-23 00:56:09.828952 | orchestrator | 2026-03-23 00:56:09.828957 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-23 00:56:09.828962 | orchestrator | Monday 23 March 2026 00:56:03 +0000 (0:00:02.430) 0:10:18.197 ********** 2026-03-23 00:56:09.828967 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.828970 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.828973 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.828976 | orchestrator | 2026-03-23 00:56:09.828979 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-23 00:56:09.828983 | orchestrator | Monday 23 March 2026 00:56:03 +0000 (0:00:00.343) 0:10:18.541 ********** 2026-03-23 00:56:09.828986 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:56:09.828989 | orchestrator | 2026-03-23 00:56:09.828992 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-23 00:56:09.828995 | orchestrator | Monday 23 March 2026 00:56:04 +0000 (0:00:00.782) 0:10:19.324 ********** 2026-03-23 00:56:09.828998 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.829001 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.829004 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.829007 | orchestrator | 2026-03-23 00:56:09.829010 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-23 00:56:09.829013 | orchestrator | Monday 23 March 2026 00:56:04 +0000 (0:00:00.330) 0:10:19.654 ********** 2026-03-23 00:56:09.829017 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.829020 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:56:09.829023 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:56:09.829026 | orchestrator | 2026-03-23 00:56:09.829029 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-23 
00:56:09.829032 | orchestrator | Monday 23 March 2026 00:56:05 +0000 (0:00:00.306) 0:10:19.961 ********** 2026-03-23 00:56:09.829035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-23 00:56:09.829038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-23 00:56:09.829045 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-23 00:56:09.829048 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:56:09.829051 | orchestrator | 2026-03-23 00:56:09.829056 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-23 00:56:09.829059 | orchestrator | Monday 23 March 2026 00:56:06 +0000 (0:00:01.439) 0:10:21.400 ********** 2026-03-23 00:56:09.829062 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:56:09.829066 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:56:09.829069 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:56:09.829072 | orchestrator | 2026-03-23 00:56:09.829103 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:56:09.829107 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-23 00:56:09.829111 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-23 00:56:09.829114 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-23 00:56:09.829117 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-23 00:56:09.829121 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-23 00:56:09.829124 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-23 00:56:09.829127 | orchestrator | 2026-03-23 
00:56:09.829130 | orchestrator | 2026-03-23 00:56:09.829133 | orchestrator | 2026-03-23 00:56:09.829136 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:56:09.829139 | orchestrator | Monday 23 March 2026 00:56:06 +0000 (0:00:00.258) 0:10:21.659 ********** 2026-03-23 00:56:09.829142 | orchestrator | =============================================================================== 2026-03-23 00:56:09.829149 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 87.82s 2026-03-23 00:56:09.829152 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 31.64s 2026-03-23 00:56:09.829155 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 23.26s 2026-03-23 00:56:09.829158 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.50s 2026-03-23 00:56:09.829161 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.34s 2026-03-23 00:56:09.829164 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.81s 2026-03-23 00:56:09.829167 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.25s 2026-03-23 00:56:09.829171 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 7.80s 2026-03-23 00:56:09.829174 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.68s 2026-03-23 00:56:09.829177 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.24s 2026-03-23 00:56:09.829180 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 5.93s 2026-03-23 00:56:09.829183 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 5.45s 2026-03-23 00:56:09.829186 | orchestrator | ceph-mgr : Add modules 
to ceph-mgr -------------------------------------- 4.49s 2026-03-23 00:56:09.829189 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.16s 2026-03-23 00:56:09.829192 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.96s 2026-03-23 00:56:09.829195 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.84s 2026-03-23 00:56:09.829198 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.77s 2026-03-23 00:56:09.829204 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.71s 2026-03-23 00:56:09.829207 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.37s 2026-03-23 00:56:09.829210 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.30s 2026-03-23 00:56:09.829214 | orchestrator | 2026-03-23 00:56:09 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:56:12.871390 | orchestrator | 2026-03-23 00:56:12 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:56:12.872638 | orchestrator | 2026-03-23 00:56:12 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:56:12.874250 | orchestrator | 2026-03-23 00:56:12 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:56:12.874566 | orchestrator | 2026-03-23 00:56:12 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:56:15.921145 | orchestrator | 2026-03-23 00:56:15 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:56:15.923561 | orchestrator | 2026-03-23 00:56:15 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:56:15.925475 | orchestrator | 2026-03-23 00:56:15 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:56:15.925595 | 
orchestrator | 2026-03-23 00:56:15 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:56:18.969804 | orchestrator | 2026-03-23 00:56:18 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:56:18.972132 | orchestrator | 2026-03-23 00:56:18 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:56:18.975928 | orchestrator | 2026-03-23 00:56:18 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:56:18.975973 | orchestrator | 2026-03-23 00:56:18 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:56:22.017743 | orchestrator | 2026-03-23 00:56:22 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:56:22.020411 | orchestrator | 2026-03-23 00:56:22 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:56:22.023040 | orchestrator | 2026-03-23 00:56:22 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:56:22.023357 | orchestrator | 2026-03-23 00:56:22 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:56:25.063254 | orchestrator | 2026-03-23 00:56:25 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:56:25.065207 | orchestrator | 2026-03-23 00:56:25 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:56:25.067245 | orchestrator | 2026-03-23 00:56:25 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:56:25.067299 | orchestrator | 2026-03-23 00:56:25 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:56:28.107359 | orchestrator | 2026-03-23 00:56:28 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:56:28.110425 | orchestrator | 2026-03-23 00:56:28 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:56:28.113592 | orchestrator | 2026-03-23 00:56:28 | INFO  | Task 
e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:57:04.695715 | orchestrator | 2026-03-23 00:57:04 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:04.697689 | orchestrator | 2026-03-23 00:57:04 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state STARTED 2026-03-23 00:57:04.697737 | orchestrator | 2026-03-23 00:57:04 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:07.747324 | orchestrator | 2026-03-23 00:57:07 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED 2026-03-23 00:57:07.748820 | orchestrator | 2026-03-23 00:57:07 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:07.750777 | orchestrator | 2026-03-23 00:57:07 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:07.752582 | orchestrator | 2026-03-23 00:57:07 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:07.756135 | orchestrator | 2026-03-23 00:57:07 | INFO  | Task 63a482a7-5492-4304-94f2-6fad2464c98f is in state SUCCESS 2026-03-23 00:57:07.756741 | orchestrator | 2026-03-23 00:57:07 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:07.758343 | orchestrator | 2026-03-23 00:57:07.758386 | orchestrator | 2026-03-23 00:57:07.758396 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-23 00:57:07.758460 | orchestrator | 2026-03-23 00:57:07.758469 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-23 00:57:07.758518 | orchestrator | Monday 23 March 2026 00:54:15 +0000 (0:00:00.099) 0:00:00.099 ********** 2026-03-23 00:57:07.758527 | orchestrator | ok: [localhost] => { 2026-03-23 00:57:07.758536 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-03-23 00:57:07.758558 | orchestrator | } 2026-03-23 00:57:07.758566 | orchestrator | 2026-03-23 00:57:07.758573 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-23 00:57:07.758580 | orchestrator | Monday 23 March 2026 00:54:15 +0000 (0:00:00.049) 0:00:00.149 ********** 2026-03-23 00:57:07.758587 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-23 00:57:07.758596 | orchestrator | ...ignoring 2026-03-23 00:57:07.758705 | orchestrator | 2026-03-23 00:57:07.758716 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-23 00:57:07.758722 | orchestrator | Monday 23 March 2026 00:54:18 +0000 (0:00:02.902) 0:00:03.051 ********** 2026-03-23 00:57:07.758729 | orchestrator | skipping: [localhost] 2026-03-23 00:57:07.758734 | orchestrator | 2026-03-23 00:57:07.758741 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-23 00:57:07.758746 | orchestrator | Monday 23 March 2026 00:54:18 +0000 (0:00:00.069) 0:00:03.121 ********** 2026-03-23 00:57:07.758753 | orchestrator | ok: [localhost] 2026-03-23 00:57:07.758759 | orchestrator | 2026-03-23 00:57:07.758765 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 00:57:07.758771 | orchestrator | 2026-03-23 00:57:07.758776 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 00:57:07.758782 | orchestrator | Monday 23 March 2026 00:54:19 +0000 (0:00:00.222) 0:00:03.344 ********** 2026-03-23 00:57:07.758788 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:57:07.758793 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:57:07.758799 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:57:07.758804 | orchestrator | 2026-03-23 00:57:07.758810 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 00:57:07.758815 | orchestrator | Monday 23 March 2026 00:54:19 +0000 (0:00:00.333) 0:00:03.678 ********** 2026-03-23 00:57:07.758821 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-23 00:57:07.758827 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-23 00:57:07.758833 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-23 00:57:07.758838 | orchestrator | 2026-03-23 00:57:07.758844 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-23 00:57:07.758850 | orchestrator | 2026-03-23 00:57:07.758856 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-23 00:57:07.758861 | orchestrator | Monday 23 March 2026 00:54:19 +0000 (0:00:00.555) 0:00:04.233 ********** 2026-03-23 00:57:07.758867 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-23 00:57:07.758873 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-23 00:57:07.758879 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-23 00:57:07.758884 | orchestrator | 2026-03-23 00:57:07.758890 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-23 00:57:07.758896 | orchestrator | Monday 23 March 2026 00:54:20 +0000 (0:00:00.395) 0:00:04.629 ********** 2026-03-23 00:57:07.758902 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:57:07.758908 | orchestrator | 2026-03-23 00:57:07.758914 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-23 00:57:07.758920 | orchestrator | Monday 23 March 2026 00:54:20 +0000 (0:00:00.618) 0:00:05.248 ********** 2026-03-23 00:57:07.758944 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-23 00:57:07.758960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-23 00:57:07.758967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-23 00:57:07.758978 | orchestrator | 2026-03-23 00:57:07.758989 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-23 00:57:07.759043 | orchestrator | Monday 23 March 2026 00:54:24 +0000 (0:00:03.214) 0:00:08.462 ********** 2026-03-23 00:57:07.759049 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:57:07.759055 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:57:07.759064 | 
orchestrator | changed: [testbed-node-0] 2026-03-23 00:57:07.759070 | orchestrator | 2026-03-23 00:57:07.759076 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-23 00:57:07.759081 | orchestrator | Monday 23 March 2026 00:54:24 +0000 (0:00:00.736) 0:00:09.199 ********** 2026-03-23 00:57:07.759087 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:57:07.759093 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:57:07.759099 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:57:07.759105 | orchestrator | 2026-03-23 00:57:07.759111 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-23 00:57:07.759116 | orchestrator | Monday 23 March 2026 00:54:26 +0000 (0:00:01.618) 0:00:10.818 ********** 2026-03-23 00:57:07.759124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-23 00:57:07.759135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-23 00:57:07.759148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-23 00:57:07.759156 | orchestrator | 2026-03-23 00:57:07.759162 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-23 00:57:07.759168 | orchestrator | Monday 23 March 2026 00:54:30 +0000 (0:00:04.186) 0:00:15.004 ********** 2026-03-23 00:57:07.759175 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:57:07.759181 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:57:07.759187 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:57:07.759193 | orchestrator | 2026-03-23 00:57:07.759199 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-23 00:57:07.759209 | orchestrator | Monday 23 March 2026 00:54:31 +0000 (0:00:01.035) 0:00:16.040 ********** 2026-03-23 00:57:07.759216 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:57:07.759222 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:57:07.759229 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:57:07.759234 | orchestrator | 2026-03-23 00:57:07.759241 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-23 00:57:07.759247 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:03.662) 0:00:19.702 ********** 2026-03-23 00:57:07.759253 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:57:07.759260 | orchestrator | 2026-03-23 00:57:07.759267 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-23 
00:57:07.759274 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:00.455) 0:00:20.158 ********** 2026-03-23 00:57:07.759290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:57:07.759298 | orchestrator | 
skipping: [testbed-node-1] 2026-03-23 00:57:07.759305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:57:07.759317 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:57:07.759329 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:57:07.759336 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:57:07.759344 | orchestrator | 2026-03-23 00:57:07.759352 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2026-03-23 00:57:07.759360 | orchestrator | Monday 23 March 2026 00:54:38 +0000 (0:00:02.776) 0:00:22.934 ********** 2026-03-23 00:57:07.759368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2026-03-23 00:57:07.759380 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:57:07.759392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:57:07.759400 | orchestrator | skipping: 
[testbed-node-1] 2026-03-23 00:57:07.759417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:57:07.759430 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:57:07.759439 | orchestrator | 2026-03-23 
00:57:07.759446 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-23 00:57:07.759453 | orchestrator | Monday 23 March 2026 00:54:40 +0000 (0:00:02.112) 0:00:25.047 ********** 2026-03-23 00:57:07.759460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:57:07.759468 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:57:07.759490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-03-23 00:57:07.759502 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:57:07.759509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-23 00:57:07.759516 | orchestrator | skipping: 
[testbed-node-0] 2026-03-23 00:57:07.759522 | orchestrator | 2026-03-23 00:57:07.759529 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-23 00:57:07.759535 | orchestrator | Monday 23 March 2026 00:54:43 +0000 (0:00:02.683) 0:00:27.731 ********** 2026-03-23 00:57:07.759549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-23 00:57:07.759559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-23 00:57:07.759571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-23 00:57:07.759577 | orchestrator | 2026-03-23 00:57:07.759583 | orchestrator | TASK [mariadb : Create MariaDB volume] 
*****************************************
2026-03-23 00:57:07.759593 | orchestrator | Monday 23 March 2026 00:54:46 +0000 (0:00:03.034) 0:00:30.765 **********
2026-03-23 00:57:07.759600 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:07.759606 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:57:07.759634 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:57:07.759642 | orchestrator |
2026-03-23 00:57:07.759648 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-23 00:57:07.759655 | orchestrator | Monday 23 March 2026 00:54:47 +0000 (0:00:00.811) 0:00:31.577 **********
2026-03-23 00:57:07.759661 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.759668 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:57:07.759674 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:57:07.759681 | orchestrator |
2026-03-23 00:57:07.759687 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-23 00:57:07.759694 | orchestrator | Monday 23 March 2026 00:54:47 +0000 (0:00:00.337) 0:00:31.915 **********
2026-03-23 00:57:07.759701 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.759707 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:57:07.759713 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:57:07.759735 | orchestrator |
2026-03-23 00:57:07.759742 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-23 00:57:07.759748 | orchestrator | Monday 23 March 2026 00:54:47 +0000 (0:00:00.319) 0:00:32.235 **********
2026-03-23 00:57:07.759754 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-23 00:57:07.759761 | orchestrator | ...ignoring
2026-03-23 00:57:07.759767 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-23 00:57:07.759772 | orchestrator | ...ignoring
2026-03-23 00:57:07.759778 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-23 00:57:07.759784 | orchestrator | ...ignoring
2026-03-23 00:57:07.759790 | orchestrator |
2026-03-23 00:57:07.759796 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-23 00:57:07.759802 | orchestrator | Monday 23 March 2026 00:54:58 +0000 (0:00:11.024) 0:00:43.259 **********
2026-03-23 00:57:07.759807 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.759813 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:57:07.759820 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:57:07.759825 | orchestrator |
2026-03-23 00:57:07.759831 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-23 00:57:07.759837 | orchestrator | Monday 23 March 2026 00:54:59 +0000 (0:00:00.399) 0:00:43.659 **********
2026-03-23 00:57:07.759843 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:07.759848 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.759854 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.759860 | orchestrator |
2026-03-23 00:57:07.759866 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-23 00:57:07.759872 | orchestrator | Monday 23 March 2026 00:54:59 +0000 (0:00:00.481) 0:00:44.141 **********
2026-03-23 00:57:07.759878 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:07.759884 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.759890 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.759896 | orchestrator |
2026-03-23 00:57:07.759902 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-23 00:57:07.759908 | orchestrator | Monday 23 March 2026 00:55:00 +0000 (0:00:00.463) 0:00:44.605 **********
2026-03-23 00:57:07.759914 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:07.759920 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.759927 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.759933 | orchestrator |
2026-03-23 00:57:07.759939 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-23 00:57:07.759950 | orchestrator | Monday 23 March 2026 00:55:00 +0000 (0:00:00.662) 0:00:45.267 **********
2026-03-23 00:57:07.759957 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.759963 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:57:07.759969 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:57:07.759975 | orchestrator |
2026-03-23 00:57:07.759981 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-23 00:57:07.759986 | orchestrator | Monday 23 March 2026 00:55:01 +0000 (0:00:00.430) 0:00:45.698 **********
2026-03-23 00:57:07.760010 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:07.760017 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.760022 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.760028 | orchestrator |
2026-03-23 00:57:07.760034 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-23 00:57:07.760043 | orchestrator | Monday 23 March 2026 00:55:01 +0000 (0:00:00.432) 0:00:46.130 **********
2026-03-23 00:57:07.760050 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.760056 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.760062 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-23 00:57:07.760069 | orchestrator |
2026-03-23 00:57:07.760076 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-23 00:57:07.760082 | orchestrator | Monday 23 March 2026 00:55:02 +0000 (0:00:00.397) 0:00:46.528 **********
2026-03-23 00:57:07.760088 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:07.760094 | orchestrator |
2026-03-23 00:57:07.760099 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-23 00:57:07.760105 | orchestrator | Monday 23 March 2026 00:55:12 +0000 (0:00:09.920) 0:00:56.448 **********
2026-03-23 00:57:07.760111 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.760117 | orchestrator |
2026-03-23 00:57:07.760123 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-23 00:57:07.760129 | orchestrator | Monday 23 March 2026 00:55:12 +0000 (0:00:00.264) 0:00:56.713 **********
2026-03-23 00:57:07.760135 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:07.760140 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.760146 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.760152 | orchestrator |
2026-03-23 00:57:07.760158 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-23 00:57:07.760164 | orchestrator | Monday 23 March 2026 00:55:13 +0000 (0:00:00.771) 0:00:57.484 **********
2026-03-23 00:57:07.760170 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:07.760175 | orchestrator |
2026-03-23 00:57:07.760181 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-23 00:57:07.760187 | orchestrator | Monday 23 March 2026 00:55:20 +0000 (0:00:07.778) 0:01:05.262 **********
2026-03-23 00:57:07.760193 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.760198 | orchestrator |
2026-03-23 00:57:07.760204 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-23 00:57:07.760210 | orchestrator | Monday 23 March 2026 00:55:22 +0000 (0:00:01.708) 0:01:06.971 **********
2026-03-23 00:57:07.760216 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.760222 | orchestrator |
2026-03-23 00:57:07.760228 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-23 00:57:07.760234 | orchestrator | Monday 23 March 2026 00:55:25 +0000 (0:00:02.515) 0:01:09.486 **********
2026-03-23 00:57:07.760240 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:07.760246 | orchestrator |
2026-03-23 00:57:07.760252 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-23 00:57:07.760258 | orchestrator | Monday 23 March 2026 00:55:25 +0000 (0:00:00.280) 0:01:09.767 **********
2026-03-23 00:57:07.760264 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:07.760270 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.760276 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.760282 | orchestrator |
2026-03-23 00:57:07.760287 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-23 00:57:07.760298 | orchestrator | Monday 23 March 2026 00:55:25 +0000 (0:00:00.304) 0:01:10.071 **********
2026-03-23 00:57:07.760304 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:07.760310 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:57:07.760315 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:57:07.760321 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-23 00:57:07.760327 | orchestrator |
2026-03-23 00:57:07.760332 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-23 00:57:07.760338 | orchestrator | skipping: no hosts matched
2026-03-23 00:57:07.760344 | orchestrator |
2026-03-23 00:57:07.760350 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-23 00:57:07.760356 | orchestrator |
2026-03-23 00:57:07.760363 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-23 00:57:07.760370 | orchestrator | Monday 23 March 2026 00:55:26 +0000 (0:00:00.382) 0:01:10.454 **********
2026-03-23 00:57:07.760376 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:57:07.760382 | orchestrator |
2026-03-23 00:57:07.760387 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-23 00:57:07.760393 | orchestrator | Monday 23 March 2026 00:55:48 +0000 (0:00:22.558) 0:01:33.012 **********
2026-03-23 00:57:07.760398 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:57:07.760404 | orchestrator |
2026-03-23 00:57:07.760410 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-23 00:57:07.760416 | orchestrator | Monday 23 March 2026 00:55:59 +0000 (0:00:10.625) 0:01:43.638 **********
2026-03-23 00:57:07.760421 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:57:07.760426 | orchestrator |
2026-03-23 00:57:07.760432 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-23 00:57:07.760437 | orchestrator |
2026-03-23 00:57:07.760442 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-23 00:57:07.760448 | orchestrator | Monday 23 March 2026 00:56:01 +0000 (0:00:02.477) 0:01:46.115 **********
2026-03-23 00:57:07.760453 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:57:07.760458 | orchestrator |
2026-03-23 00:57:07.760464 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-23 00:57:07.760469 | orchestrator | Monday 23 March 2026 00:56:18 +0000 (0:00:16.271) 0:02:02.386 **********
2026-03-23 00:57:07.760475 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
2026-03-23 00:57:07.760481 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:57:07.760487 | orchestrator |
2026-03-23 00:57:07.760494 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-23 00:57:07.760500 | orchestrator | Monday 23 March 2026 00:56:33 +0000 (0:00:15.597) 0:02:17.984 **********
2026-03-23 00:57:07.760511 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:57:07.760517 | orchestrator |
2026-03-23 00:57:07.760524 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-23 00:57:07.760530 | orchestrator |
2026-03-23 00:57:07.760535 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-23 00:57:07.760544 | orchestrator | Monday 23 March 2026 00:56:35 +0000 (0:00:02.198) 0:02:20.182 **********
2026-03-23 00:57:07.760549 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:07.760555 | orchestrator |
2026-03-23 00:57:07.760561 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-23 00:57:07.760566 | orchestrator | Monday 23 March 2026 00:56:47 +0000 (0:00:11.527) 0:02:31.710 **********
2026-03-23 00:57:07.760572 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.760579 | orchestrator |
2026-03-23 00:57:07.760585 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-23 00:57:07.760592 | orchestrator | Monday 23 March 2026 00:56:50 +0000 (0:00:03.566) 0:02:35.277 **********
2026-03-23 00:57:07.760598 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.760611 | orchestrator |
2026-03-23 00:57:07.760618 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-23 00:57:07.760624 | orchestrator |
2026-03-23 00:57:07.760630 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-23 00:57:07.760637 | orchestrator | Monday 23 March 2026 00:56:53 +0000 (0:00:02.177) 0:02:37.455 **********
2026-03-23 00:57:07.760643 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:57:07.760649 | orchestrator |
2026-03-23 00:57:07.760656 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-23 00:57:07.760662 | orchestrator | Monday 23 March 2026 00:56:53 +0000 (0:00:00.592) 0:02:38.047 **********
2026-03-23 00:57:07.760669 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.760675 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.760681 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:07.760688 | orchestrator |
2026-03-23 00:57:07.760694 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-23 00:57:07.760700 | orchestrator | Monday 23 March 2026 00:56:56 +0000 (0:00:02.365) 0:02:40.412 **********
2026-03-23 00:57:07.760707 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.760714 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.760721 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:07.760728 | orchestrator |
2026-03-23 00:57:07.760734 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-23 00:57:07.760741 | orchestrator | Monday 23 March 2026 00:56:58 +0000 (0:00:02.454) 0:02:42.866 **********
2026-03-23 00:57:07.760747 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.760754 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.760760 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:07.760766 | orchestrator |
2026-03-23 00:57:07.760772 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-23
00:57:07.760778 | orchestrator | Monday 23 March 2026 00:57:00 +0000 (0:00:02.167) 0:02:45.034 **********
2026-03-23 00:57:07.760784 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.760789 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.760795 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:07.760800 | orchestrator |
2026-03-23 00:57:07.760807 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-23 00:57:07.760812 | orchestrator | Monday 23 March 2026 00:57:03 +0000 (0:00:02.391) 0:02:47.425 **********
2026-03-23 00:57:07.760818 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:07.760824 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:57:07.760830 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:57:07.760836 | orchestrator |
2026-03-23 00:57:07.760841 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-23 00:57:07.760847 | orchestrator | Monday 23 March 2026 00:57:05 +0000 (0:00:02.552) 0:02:49.977 **********
2026-03-23 00:57:07.760852 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:07.760857 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:57:07.760863 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:57:07.760869 | orchestrator |
2026-03-23 00:57:07.760875 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:57:07.760881 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-23 00:57:07.760887 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-03-23 00:57:07.760894 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-23 00:57:07.760899 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-23 00:57:07.760909 | orchestrator |
2026-03-23 00:57:07.760916 | orchestrator |
2026-03-23 00:57:07.760922 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:57:07.760929 | orchestrator | Monday 23 March 2026 00:57:05 +0000 (0:00:00.218) 0:02:50.196 **********
2026-03-23 00:57:07.760935 | orchestrator | ===============================================================================
2026-03-23 00:57:07.760941 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.83s
2026-03-23 00:57:07.760948 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.22s
2026-03-23 00:57:07.760954 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.53s
2026-03-23 00:57:07.760961 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.02s
2026-03-23 00:57:07.760967 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.92s
2026-03-23 00:57:07.760978 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.78s
2026-03-23 00:57:07.760985 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.68s
2026-03-23 00:57:07.760992 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.19s
2026-03-23 00:57:07.761019 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.66s
2026-03-23 00:57:07.761026 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 3.57s
2026-03-23 00:57:07.761033 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.22s
2026-03-23 00:57:07.761039 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.03s
2026-03-23 00:57:07.761046 | orchestrator | Check MariaDB service --------------------------------------------------- 2.90s
2026-03-23 00:57:07.761053 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.78s
2026-03-23 00:57:07.761059 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.68s
2026-03-23 00:57:07.761066 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.55s
2026-03-23 00:57:07.761072 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.52s
2026-03-23 00:57:07.761078 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.45s
2026-03-23 00:57:07.761083 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.39s
2026-03-23 00:57:07.761089 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.37s
2026-03-23 00:57:10.806817 | orchestrator | 2026-03-23 00:57:10 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state STARTED
2026-03-23 00:57:10.809499 | orchestrator | 2026-03-23 00:57:10 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED
2026-03-23 00:57:10.810338 | orchestrator | 2026-03-23 00:57:10 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED
2026-03-23 00:57:10.811262 | orchestrator | 2026-03-23 00:57:10 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED
2026-03-23 00:57:10.811357 | orchestrator | 2026-03-23 00:57:10 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:57:13.849198 | orchestrator | 2026-03-23 00:57:13 | INFO  | Task e3c66c39-411e-425d-96fc-5ba778016c07 is in state SUCCESS
2026-03-23 00:57:13.850170 | orchestrator |
2026-03-23 00:57:13.850194 | orchestrator |
2026-03-23 00:57:13.850199 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 00:57:13.850204 | orchestrator |
2026-03-23
00:57:13.850208 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 00:57:13.850214 | orchestrator | Monday 23 March 2026 00:54:15 +0000 (0:00:00.313) 0:00:00.313 ********** 2026-03-23 00:57:13.850221 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:57:13.850231 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:57:13.850240 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:57:13.850246 | orchestrator | 2026-03-23 00:57:13.850270 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 00:57:13.850277 | orchestrator | Monday 23 March 2026 00:54:16 +0000 (0:00:00.294) 0:00:00.607 ********** 2026-03-23 00:57:13.850284 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-23 00:57:13.850291 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-23 00:57:13.850298 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-23 00:57:13.850305 | orchestrator | 2026-03-23 00:57:13.850391 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-23 00:57:13.850398 | orchestrator | 2026-03-23 00:57:13.850402 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-23 00:57:13.850407 | orchestrator | Monday 23 March 2026 00:54:16 +0000 (0:00:00.309) 0:00:00.917 ********** 2026-03-23 00:57:13.850411 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:57:13.850416 | orchestrator | 2026-03-23 00:57:13.850420 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-23 00:57:13.850424 | orchestrator | Monday 23 March 2026 00:54:17 +0000 (0:00:00.486) 0:00:01.404 ********** 2026-03-23 00:57:13.850429 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'vm.max_map_count', 'value': 262144}) 2026-03-23 00:57:13.850433 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-23 00:57:13.850437 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-23 00:57:13.850441 | orchestrator | 2026-03-23 00:57:13.850445 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-23 00:57:13.850450 | orchestrator | Monday 23 March 2026 00:54:18 +0000 (0:00:01.193) 0:00:02.597 ********** 2026-03-23 00:57:13.850463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850511 | orchestrator | 2026-03-23 00:57:13.850515 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-23 00:57:13.850520 | orchestrator | Monday 23 March 2026 00:54:19 +0000 (0:00:01.602) 0:00:04.200 ********** 2026-03-23 00:57:13.850524 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:57:13.850528 | orchestrator | 2026-03-23 00:57:13.850532 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-23 00:57:13.850540 | orchestrator | Monday 23 March 2026 00:54:20 +0000 (0:00:00.559) 0:00:04.759 ********** 2026-03-23 00:57:13.850550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850605 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850612 | orchestrator | 2026-03-23 00:57:13.850618 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-23 00:57:13.850624 | orchestrator | Monday 23 March 2026 00:54:23 +0000 (0:00:02.815) 0:00:07.575 ********** 2026-03-23 00:57:13.850630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-23 00:57:13.850639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-23 00:57:13.850650 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:57:13.850658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-23 00:57:13.850670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-23 00:57:13.850677 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:57:13.850685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-23 00:57:13.850695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-23 00:57:13.850703 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:57:13.850714 | orchestrator | 2026-03-23 00:57:13.850721 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-23 00:57:13.850728 | orchestrator | Monday 23 March 2026 00:54:23 +0000 (0:00:00.617) 0:00:08.193 ********** 2026-03-23 00:57:13.850736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-23 00:57:13.850747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-23 00:57:13.850753 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:57:13.850757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-23 00:57:13.850764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-23 00:57:13.850772 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:57:13.850776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-23 00:57:13.850785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-23 00:57:13.850789 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:57:13.850793 | orchestrator | 2026-03-23 00:57:13.850798 | orchestrator | TASK [opensearch : Copying over 
config.json files for services] **************** 2026-03-23 00:57:13.850802 | orchestrator | Monday 23 March 2026 00:54:24 +0000 (0:00:00.880) 0:00:09.073 ********** 2026-03-23 00:57:13.850806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850851 | orchestrator | 2026-03-23 00:57:13.850855 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-23 00:57:13.850859 | orchestrator | Monday 23 March 2026 00:54:27 +0000 (0:00:02.844) 0:00:11.918 ********** 2026-03-23 00:57:13.850864 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:57:13.850868 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:57:13.850872 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:57:13.850876 | orchestrator | 2026-03-23 00:57:13.850880 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-23 00:57:13.850884 | orchestrator | Monday 23 March 2026 00:54:30 +0000 (0:00:03.169) 0:00:15.088 ********** 2026-03-23 00:57:13.850888 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:57:13.850893 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:57:13.850897 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:57:13.850901 | orchestrator | 2026-03-23 00:57:13.850905 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-23 00:57:13.850909 | orchestrator | Monday 23 March 2026 00:54:32 +0000 (0:00:01.598) 0:00:16.687 ********** 2026-03-23 00:57:13.850916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-23 00:57:13.850949 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-23 00:57:13.850976 | orchestrator | 2026-03-23 00:57:13.850980 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-23 00:57:13.851020 | orchestrator | Monday 23 March 2026 00:54:34 +0000 (0:00:02.299) 0:00:18.986 ********** 2026-03-23 00:57:13.851025 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:57:13.851029 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:57:13.851033 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:57:13.851037 | orchestrator | 2026-03-23 00:57:13.851041 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-23 00:57:13.851045 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:00.374) 0:00:19.360 ********** 
2026-03-23 00:57:13.851049 | orchestrator |
2026-03-23 00:57:13.851054 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-23 00:57:13.851058 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:00.059) 0:00:19.419 **********
2026-03-23 00:57:13.851062 | orchestrator |
2026-03-23 00:57:13.851066 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-23 00:57:13.851073 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:00.056) 0:00:19.476 **********
2026-03-23 00:57:13.851077 | orchestrator |
2026-03-23 00:57:13.851081 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-23 00:57:13.851085 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:00.059) 0:00:19.536 **********
2026-03-23 00:57:13.851089 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:13.851094 | orchestrator |
2026-03-23 00:57:13.851098 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-23 00:57:13.851102 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:00.195) 0:00:19.732 **********
2026-03-23 00:57:13.851106 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:57:13.851110 | orchestrator |
2026-03-23 00:57:13.851114 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-23 00:57:13.851118 | orchestrator | Monday 23 March 2026 00:54:35 +0000 (0:00:00.169) 0:00:19.901 **********
2026-03-23 00:57:13.851122 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:13.851126 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:57:13.851130 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:57:13.851134 | orchestrator |
2026-03-23 00:57:13.851139 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-23 00:57:13.851143 | orchestrator | Monday 23 March 2026 00:55:48 +0000 (0:01:13.158) 0:01:33.059 **********
2026-03-23 00:57:13.851147 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:13.851151 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:57:13.851155 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:57:13.851159 | orchestrator |
2026-03-23 00:57:13.851165 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-23 00:57:13.851170 | orchestrator | Monday 23 March 2026 00:56:58 +0000 (0:01:09.860) 0:02:42.920 **********
2026-03-23 00:57:13.851174 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:57:13.851178 | orchestrator |
2026-03-23 00:57:13.851183 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-23 00:57:13.851190 | orchestrator | Monday 23 March 2026 00:56:59 +0000 (0:00:00.620) 0:02:43.540 **********
2026-03-23 00:57:13.851198 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:13.851208 | orchestrator |
2026-03-23 00:57:13.851216 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-03-23 00:57:13.851222 | orchestrator | Monday 23 March 2026 00:57:01 +0000 (0:00:02.592) 0:02:46.133 **********
2026-03-23 00:57:13.851229 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:13.851236 | orchestrator |
2026-03-23 00:57:13.851242 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-03-23 00:57:13.851249 | orchestrator | Monday 23 March 2026 00:57:04 +0000 (0:00:02.383) 0:02:48.355 **********
2026-03-23 00:57:13.851256 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:57:13.851262 | orchestrator |
2026-03-23 00:57:13.851269 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-23 00:57:13.851275 | orchestrator | Monday 23 March 2026 00:57:06 +0000 (0:00:02.595) 0:02:50.739 **********
2026-03-23 00:57:13.851282 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:13.851289 | orchestrator |
2026-03-23 00:57:13.851297 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-23 00:57:13.851304 | orchestrator | Monday 23 March 2026 00:57:08 +0000 (0:00:02.595) 0:02:53.334 **********
2026-03-23 00:57:13.851311 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:57:13.851318 | orchestrator |
2026-03-23 00:57:13.851325 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:57:13.851334 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-23 00:57:13.851342 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-23 00:57:13.851360 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-23 00:57:13.851367 | orchestrator |
2026-03-23 00:57:13.851374 | orchestrator |
2026-03-23 00:57:13.851380 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:57:13.851387 | orchestrator | Monday 23 March 2026 00:57:12 +0000 (0:00:03.306) 0:02:56.641 **********
2026-03-23 00:57:13.851394 | orchestrator | ===============================================================================
2026-03-23 00:57:13.851400 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.16s
2026-03-23 00:57:13.851407 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 69.86s
2026-03-23 00:57:13.851414 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.31s
2026-03-23 00:57:13.851421 | orchestrator | opensearch : Copying over opensearch service config file
---------------- 3.17s 2026-03-23 00:57:13.851428 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.84s 2026-03-23 00:57:13.851434 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.82s 2026-03-23 00:57:13.851441 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.60s 2026-03-23 00:57:13.851447 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.59s 2026-03-23 00:57:13.851454 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.38s 2026-03-23 00:57:13.851461 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.30s 2026-03-23 00:57:13.851467 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.22s 2026-03-23 00:57:13.851474 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.60s 2026-03-23 00:57:13.851481 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.60s 2026-03-23 00:57:13.851488 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.19s 2026-03-23 00:57:13.851495 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.88s 2026-03-23 00:57:13.851502 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-03-23 00:57:13.851508 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.62s 2026-03-23 00:57:13.851515 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-03-23 00:57:13.851523 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-03-23 00:57:13.851530 | orchestrator | opensearch : include_tasks 
---------------------------------------------- 0.37s 2026-03-23 00:57:13.851876 | orchestrator | 2026-03-23 00:57:13 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:13.853604 | orchestrator | 2026-03-23 00:57:13 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:13.855217 | orchestrator | 2026-03-23 00:57:13 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:13.856323 | orchestrator | 2026-03-23 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:16.898127 | orchestrator | 2026-03-23 00:57:16 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:16.900729 | orchestrator | 2026-03-23 00:57:16 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:16.902132 | orchestrator | 2026-03-23 00:57:16 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:16.902169 | orchestrator | 2026-03-23 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:19.937739 | orchestrator | 2026-03-23 00:57:19 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:19.939228 | orchestrator | 2026-03-23 00:57:19 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:19.941952 | orchestrator | 2026-03-23 00:57:19 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:19.942077 | orchestrator | 2026-03-23 00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:22.978369 | orchestrator | 2026-03-23 00:57:22 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:22.982091 | orchestrator | 2026-03-23 00:57:22 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:22.982361 | orchestrator | 2026-03-23 00:57:22 | INFO  | Task 
7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:22.982375 | orchestrator | 2026-03-23 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:26.020639 | orchestrator | 2026-03-23 00:57:26 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:26.020756 | orchestrator | 2026-03-23 00:57:26 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:26.022444 | orchestrator | 2026-03-23 00:57:26 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:26.022498 | orchestrator | 2026-03-23 00:57:26 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:29.075940 | orchestrator | 2026-03-23 00:57:29 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:29.076444 | orchestrator | 2026-03-23 00:57:29 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:29.078937 | orchestrator | 2026-03-23 00:57:29 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:29.078981 | orchestrator | 2026-03-23 00:57:29 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:32.114949 | orchestrator | 2026-03-23 00:57:32 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:32.115605 | orchestrator | 2026-03-23 00:57:32 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:32.116256 | orchestrator | 2026-03-23 00:57:32 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:32.116297 | orchestrator | 2026-03-23 00:57:32 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:35.164121 | orchestrator | 2026-03-23 00:57:35 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:35.173213 | orchestrator | 2026-03-23 00:57:35 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state 
STARTED 2026-03-23 00:57:35.173835 | orchestrator | 2026-03-23 00:57:35 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:35.173888 | orchestrator | 2026-03-23 00:57:35 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:38.214767 | orchestrator | 2026-03-23 00:57:38 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:38.216399 | orchestrator | 2026-03-23 00:57:38 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:38.218075 | orchestrator | 2026-03-23 00:57:38 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:38.218123 | orchestrator | 2026-03-23 00:57:38 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:41.281843 | orchestrator | 2026-03-23 00:57:41 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:41.283826 | orchestrator | 2026-03-23 00:57:41 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:41.287358 | orchestrator | 2026-03-23 00:57:41 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:41.287580 | orchestrator | 2026-03-23 00:57:41 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:44.326617 | orchestrator | 2026-03-23 00:57:44 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:44.328478 | orchestrator | 2026-03-23 00:57:44 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:44.329710 | orchestrator | 2026-03-23 00:57:44 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:44.329743 | orchestrator | 2026-03-23 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:47.365899 | orchestrator | 2026-03-23 00:57:47 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:47.367322 | orchestrator | 
2026-03-23 00:57:47 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:47.369831 | orchestrator | 2026-03-23 00:57:47 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:47.369868 | orchestrator | 2026-03-23 00:57:47 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:50.411764 | orchestrator | 2026-03-23 00:57:50 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:50.413135 | orchestrator | 2026-03-23 00:57:50 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:50.413808 | orchestrator | 2026-03-23 00:57:50 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:50.413838 | orchestrator | 2026-03-23 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:53.463580 | orchestrator | 2026-03-23 00:57:53 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:53.465131 | orchestrator | 2026-03-23 00:57:53 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:53.467246 | orchestrator | 2026-03-23 00:57:53 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:53.467661 | orchestrator | 2026-03-23 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:56.510302 | orchestrator | 2026-03-23 00:57:56 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:56.512344 | orchestrator | 2026-03-23 00:57:56 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:56.514138 | orchestrator | 2026-03-23 00:57:56 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:56.514211 | orchestrator | 2026-03-23 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:57:59.559163 | orchestrator | 2026-03-23 00:57:59 | INFO  | Task 
bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:57:59.561259 | orchestrator | 2026-03-23 00:57:59 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:57:59.561940 | orchestrator | 2026-03-23 00:57:59 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:57:59.561994 | orchestrator | 2026-03-23 00:57:59 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:02.607756 | orchestrator | 2026-03-23 00:58:02 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state STARTED 2026-03-23 00:58:02.609765 | orchestrator | 2026-03-23 00:58:02 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:02.610853 | orchestrator | 2026-03-23 00:58:02 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:02.610908 | orchestrator | 2026-03-23 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:05.652477 | orchestrator | 2026-03-23 00:58:05 | INFO  | Task bb1f6687-9cac-49ae-bedf-782eb74a1931 is in state SUCCESS 2026-03-23 00:58:05.653602 | orchestrator | 2026-03-23 00:58:05.653656 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-23 00:58:05.653663 | orchestrator | 2.16.14 2026-03-23 00:58:05.653667 | orchestrator | 2026-03-23 00:58:05.653671 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-23 00:58:05.653675 | orchestrator | 2026-03-23 00:58:05.653679 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-23 00:58:05.653684 | orchestrator | Monday 23 March 2026 00:56:12 +0000 (0:00:00.550) 0:00:00.550 ********** 2026-03-23 00:58:05.653687 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:58:05.653692 | orchestrator | 2026-03-23 00:58:05.653696 | orchestrator | 
TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-23 00:58:05.653699 | orchestrator | Monday 23 March 2026 00:56:12 +0000 (0:00:00.608) 0:00:01.158 ********** 2026-03-23 00:58:05.653703 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.653707 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.653711 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.653715 | orchestrator | 2026-03-23 00:58:05.653734 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-23 00:58:05.653738 | orchestrator | Monday 23 March 2026 00:56:13 +0000 (0:00:00.880) 0:00:02.038 ********** 2026-03-23 00:58:05.653742 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.653746 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.653749 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.653753 | orchestrator | 2026-03-23 00:58:05.653757 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-23 00:58:05.653761 | orchestrator | Monday 23 March 2026 00:56:13 +0000 (0:00:00.280) 0:00:02.318 ********** 2026-03-23 00:58:05.653765 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.653768 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.653772 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.653776 | orchestrator | 2026-03-23 00:58:05.653780 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-23 00:58:05.653796 | orchestrator | Monday 23 March 2026 00:56:14 +0000 (0:00:00.738) 0:00:03.057 ********** 2026-03-23 00:58:05.653801 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.653805 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.653808 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.653812 | orchestrator | 2026-03-23 00:58:05.653816 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] 
****************************************** 2026-03-23 00:58:05.653820 | orchestrator | Monday 23 March 2026 00:56:14 +0000 (0:00:00.292) 0:00:03.350 ********** 2026-03-23 00:58:05.653824 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.653827 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.653831 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.653835 | orchestrator | 2026-03-23 00:58:05.653873 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-23 00:58:05.653880 | orchestrator | Monday 23 March 2026 00:56:15 +0000 (0:00:00.303) 0:00:03.654 ********** 2026-03-23 00:58:05.653886 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.653892 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.653898 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.653904 | orchestrator | 2026-03-23 00:58:05.653992 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-23 00:58:05.654040 | orchestrator | Monday 23 March 2026 00:56:15 +0000 (0:00:00.319) 0:00:03.973 ********** 2026-03-23 00:58:05.654045 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.654050 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.654538 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.654549 | orchestrator | 2026-03-23 00:58:05.654554 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-23 00:58:05.654560 | orchestrator | Monday 23 March 2026 00:56:15 +0000 (0:00:00.522) 0:00:04.496 ********** 2026-03-23 00:58:05.654564 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.654569 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.654574 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.654578 | orchestrator | 2026-03-23 00:58:05.654588 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-23 
00:58:05.654592 | orchestrator | Monday 23 March 2026 00:56:16 +0000 (0:00:00.274) 0:00:04.771 ********** 2026-03-23 00:58:05.654596 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-23 00:58:05.654600 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-23 00:58:05.654604 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-23 00:58:05.654607 | orchestrator | 2026-03-23 00:58:05.654611 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-23 00:58:05.654615 | orchestrator | Monday 23 March 2026 00:56:16 +0000 (0:00:00.603) 0:00:05.374 ********** 2026-03-23 00:58:05.654619 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.654623 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.654626 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.654630 | orchestrator | 2026-03-23 00:58:05.654634 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-23 00:58:05.654638 | orchestrator | Monday 23 March 2026 00:56:17 +0000 (0:00:00.465) 0:00:05.839 ********** 2026-03-23 00:58:05.654641 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-23 00:58:05.654645 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-23 00:58:05.654649 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-23 00:58:05.654652 | orchestrator | 2026-03-23 00:58:05.654656 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-23 00:58:05.654660 | orchestrator | Monday 23 March 2026 00:56:21 +0000 (0:00:03.906) 0:00:09.746 ********** 2026-03-23 00:58:05.654664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  
2026-03-23 00:58:05.654669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-23 00:58:05.654676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-23 00:58:05.654682 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.654688 | orchestrator | 2026-03-23 00:58:05.654722 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-23 00:58:05.654730 | orchestrator | Monday 23 March 2026 00:56:21 +0000 (0:00:00.378) 0:00:10.124 ********** 2026-03-23 00:58:05.654737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.654744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.654755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.654762 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.654768 | orchestrator | 2026-03-23 00:58:05.654774 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-23 00:58:05.654781 | orchestrator | Monday 23 March 2026 00:56:22 +0000 (0:00:00.758) 0:00:10.882 ********** 2026-03-23 00:58:05.654794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.654802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.654809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.654816 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.654822 | orchestrator | 2026-03-23 00:58:05.654829 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-23 00:58:05.654835 | orchestrator | Monday 23 March 2026 00:56:22 +0000 (0:00:00.148) 0:00:11.031 ********** 2026-03-23 00:58:05.654844 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '903466fcaa6b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-23 00:56:18.238791', 'end': '2026-03-23 00:56:18.267259', 'delta': '0:00:00.028468', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['903466fcaa6b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-23 00:58:05.654852 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6b38d13d71f1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-23 00:56:19.228363', 'end': '2026-03-23 00:56:20.260430', 'delta': '0:00:01.032067', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b38d13d71f1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-23 00:58:05.654882 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cf3db6eb1c0a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-23 00:56:21.058362', 'end': '2026-03-23 00:56:21.085858', 'delta': '0:00:00.027496', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf3db6eb1c0a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-23 00:58:05.654891 | orchestrator | 2026-03-23 00:58:05.654921 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] 
******************************* 2026-03-23 00:58:05.654930 | orchestrator | Monday 23 March 2026 00:56:22 +0000 (0:00:00.342) 0:00:11.373 ********** 2026-03-23 00:58:05.654936 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.654943 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.654950 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.654956 | orchestrator | 2026-03-23 00:58:05.654962 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-23 00:58:05.654968 | orchestrator | Monday 23 March 2026 00:56:23 +0000 (0:00:00.438) 0:00:11.811 ********** 2026-03-23 00:58:05.654974 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-23 00:58:05.654980 | orchestrator | 2026-03-23 00:58:05.654987 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-23 00:58:05.654993 | orchestrator | Monday 23 March 2026 00:56:24 +0000 (0:00:01.134) 0:00:12.946 ********** 2026-03-23 00:58:05.655000 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655006 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655013 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655019 | orchestrator | 2026-03-23 00:58:05.655026 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-23 00:58:05.655033 | orchestrator | Monday 23 March 2026 00:56:24 +0000 (0:00:00.276) 0:00:13.223 ********** 2026-03-23 00:58:05.655038 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655045 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655049 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655053 | orchestrator | 2026-03-23 00:58:05.655057 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-23 00:58:05.655073 | orchestrator | Monday 23 March 2026 00:56:25 +0000 (0:00:00.395) 
0:00:13.618 ********** 2026-03-23 00:58:05.655077 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655081 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655085 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655088 | orchestrator | 2026-03-23 00:58:05.655092 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-23 00:58:05.655096 | orchestrator | Monday 23 March 2026 00:56:25 +0000 (0:00:00.443) 0:00:14.061 ********** 2026-03-23 00:58:05.655100 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.655104 | orchestrator | 2026-03-23 00:58:05.655107 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-23 00:58:05.655111 | orchestrator | Monday 23 March 2026 00:56:25 +0000 (0:00:00.129) 0:00:14.190 ********** 2026-03-23 00:58:05.655115 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655119 | orchestrator | 2026-03-23 00:58:05.655122 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-23 00:58:05.655126 | orchestrator | Monday 23 March 2026 00:56:25 +0000 (0:00:00.219) 0:00:14.410 ********** 2026-03-23 00:58:05.655130 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655138 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655142 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655145 | orchestrator | 2026-03-23 00:58:05.655149 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-23 00:58:05.655153 | orchestrator | Monday 23 March 2026 00:56:26 +0000 (0:00:00.277) 0:00:14.687 ********** 2026-03-23 00:58:05.655157 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655160 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655164 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655168 | orchestrator | 2026-03-23 
00:58:05.655172 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-23 00:58:05.655175 | orchestrator | Monday 23 March 2026 00:56:26 +0000 (0:00:00.310) 0:00:14.997 ********** 2026-03-23 00:58:05.655179 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655183 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655187 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655195 | orchestrator | 2026-03-23 00:58:05.655198 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-23 00:58:05.655202 | orchestrator | Monday 23 March 2026 00:56:26 +0000 (0:00:00.471) 0:00:15.469 ********** 2026-03-23 00:58:05.655206 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655210 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655214 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655217 | orchestrator | 2026-03-23 00:58:05.655221 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-23 00:58:05.655225 | orchestrator | Monday 23 March 2026 00:56:27 +0000 (0:00:00.304) 0:00:15.773 ********** 2026-03-23 00:58:05.655229 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655232 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655236 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655240 | orchestrator | 2026-03-23 00:58:05.655244 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-23 00:58:05.655247 | orchestrator | Monday 23 March 2026 00:56:27 +0000 (0:00:00.302) 0:00:16.076 ********** 2026-03-23 00:58:05.655251 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655255 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655259 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655281 | orchestrator | 2026-03-23 
00:58:05.655286 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-23 00:58:05.655290 | orchestrator | Monday 23 March 2026 00:56:27 +0000 (0:00:00.302) 0:00:16.379 ********** 2026-03-23 00:58:05.655294 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655298 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655301 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655305 | orchestrator | 2026-03-23 00:58:05.655309 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-23 00:58:05.655313 | orchestrator | Monday 23 March 2026 00:56:28 +0000 (0:00:00.472) 0:00:16.851 ********** 2026-03-23 00:58:05.655320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e8fe5fb--1ce5--58e9--8668--0121db885e3a-osd--block--4e8fe5fb--1ce5--58e9--8668--0121db885e3a', 'dm-uuid-LVM-lMkBvxv10W02N8c4sobLQ0h29HKaWnFCR7cPhV5ZeYAO6LBG1U6Q8KuacaSm9W1D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--64892dc7--40b9--50f4--a971--7ffdf1a56e40-osd--block--64892dc7--40b9--50f4--a971--7ffdf1a56e40', 'dm-uuid-LVM-kA6tF1EZr181HQ0V3skfDtYPJE1uMad9Sq3O4mjyCfcPDaJcjYKpbrcb0QmhBKlb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655391 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1bf36823--02d4--5086--a00f--5e3efdd328af-osd--block--1bf36823--02d4--5086--a00f--5e3efdd328af', 'dm-uuid-LVM-46WGyBqFiFffrkmN36ciuiQ5cckjL07GJJzRosi8GKlEOx76gBYFGnAqtBX1cxDm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4e8fe5fb--1ce5--58e9--8668--0121db885e3a-osd--block--4e8fe5fb--1ce5--58e9--8668--0121db885e3a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PKFKTr-pcsF-qj8g-I47G-ODeh-oqUN-pjqrkV', 'scsi-0QEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f', 'scsi-SQEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--92a7bb1e--121d--56dc--8fa7--94c9c65422a6-osd--block--92a7bb1e--121d--56dc--8fa7--94c9c65422a6', 'dm-uuid-LVM-pjMACQ4vEJDQ2evYfnAhlh3dKWsldOpt336bhYbGPyPWqVJE2N5AWnWzKl6KddjT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--64892dc7--40b9--50f4--a971--7ffdf1a56e40-osd--block--64892dc7--40b9--50f4--a971--7ffdf1a56e40'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5etLp2-aUyt-7xxq-o9eL-0H8i-eimR-6PPrxd', 'scsi-0QEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0', 'scsi-SQEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4', 'scsi-SQEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-23 00:58:05.655500 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.655509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part1', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part14', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part15', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part16', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1bf36823--02d4--5086--a00f--5e3efdd328af-osd--block--1bf36823--02d4--5086--a00f--5e3efdd328af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kEE7Ck-8cxh-3YgF-isQE-C5eu-xXzy-3tTWUP', 'scsi-0QEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e', 'scsi-SQEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--92a7bb1e--121d--56dc--8fa7--94c9c65422a6-osd--block--92a7bb1e--121d--56dc--8fa7--94c9c65422a6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vK0CLZ-Zkn8-NYp8-uCt5-hT6I-IUy5-Sf42U6', 'scsi-0QEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6', 'scsi-SQEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5', 'scsi-SQEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655581 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.655587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7e7e409--387b--5e35--af60--96efea6ce8aa-osd--block--b7e7e409--387b--5e35--af60--96efea6ce8aa', 'dm-uuid-LVM-HrrdHKVvlffigjb21JUaHBk7nln1BlPkaHRqnZG62YT1PnrapsdzAe9Rck9gjuMK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--6fa6fe99--be0d--55bf--a5b2--66c7db596be7-osd--block--6fa6fe99--be0d--55bf--a5b2--66c7db596be7', 'dm-uuid-LVM-1HDuY7LP7KT9iCr7bqCcrJ45J4jOmY5I09TE9ct2aroQWcsilZzsrpqQJmwrazJB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-23 00:58:05.655667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part1', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part14', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part15', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part16', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b7e7e409--387b--5e35--af60--96efea6ce8aa-osd--block--b7e7e409--387b--5e35--af60--96efea6ce8aa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jnm62G-v7Cy-4iJo-dTjS-LtgQ-XTBq-aem6vq', 'scsi-0QEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d', 'scsi-SQEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6fa6fe99--be0d--55bf--a5b2--66c7db596be7-osd--block--6fa6fe99--be0d--55bf--a5b2--66c7db596be7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oMXOqN-2up1-mMzD-oyo3-glzr-0BZQ-HD6hJ7', 'scsi-0QEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76', 'scsi-SQEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d', 'scsi-SQEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-23 00:58:05.655709 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.655716 | orchestrator | 2026-03-23 00:58:05.655785 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-23 00:58:05.655792 | orchestrator | Monday 23 March 2026 00:56:28 +0000 (0:00:00.572) 0:00:17.424 ********** 2026-03-23 00:58:05.655802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e8fe5fb--1ce5--58e9--8668--0121db885e3a-osd--block--4e8fe5fb--1ce5--58e9--8668--0121db885e3a', 'dm-uuid-LVM-lMkBvxv10W02N8c4sobLQ0h29HKaWnFCR7cPhV5ZeYAO6LBG1U6Q8KuacaSm9W1D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--64892dc7--40b9--50f4--a971--7ffdf1a56e40-osd--block--64892dc7--40b9--50f4--a971--7ffdf1a56e40', 'dm-uuid-LVM-kA6tF1EZr181HQ0V3skfDtYPJE1uMad9Sq3O4mjyCfcPDaJcjYKpbrcb0QmhBKlb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655822 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655861 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655871 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655878 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--1bf36823--02d4--5086--a00f--5e3efdd328af-osd--block--1bf36823--02d4--5086--a00f--5e3efdd328af', 'dm-uuid-LVM-46WGyBqFiFffrkmN36ciuiQ5cckjL07GJJzRosi8GKlEOx76gBYFGnAqtBX1cxDm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655890 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--92a7bb1e--121d--56dc--8fa7--94c9c65422a6-osd--block--92a7bb1e--121d--56dc--8fa7--94c9c65422a6', 'dm-uuid-LVM-pjMACQ4vEJDQ2evYfnAhlh3dKWsldOpt336bhYbGPyPWqVJE2N5AWnWzKl6KddjT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-23 00:58:05.655901 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655935 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part1', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part14', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part15', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part16', 'scsi-SQEMU_QEMU_HARDDISK_ab4bd864-28a4-4976-ae20-c7c9f16ccd15-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-23 00:58:05.655941 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655949 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4e8fe5fb--1ce5--58e9--8668--0121db885e3a-osd--block--4e8fe5fb--1ce5--58e9--8668--0121db885e3a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PKFKTr-pcsF-qj8g-I47G-ODeh-oqUN-pjqrkV', 'scsi-0QEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f', 'scsi-SQEMU_QEMU_HARDDISK_1d2a1acf-b303-4df2-8937-2ee8f9bbf12f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--64892dc7--40b9--50f4--a971--7ffdf1a56e40-osd--block--64892dc7--40b9--50f4--a971--7ffdf1a56e40'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5etLp2-aUyt-7xxq-o9eL-0H8i-eimR-6PPrxd', 'scsi-0QEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0', 'scsi-SQEMU_QEMU_HARDDISK_c3b20d12-9473-438c-9aa2-c72737b9e6d0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655967 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655971 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4', 'scsi-SQEMU_QEMU_HARDDISK_6d03a194-715d-49d1-b802-c824960a80c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655983 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655994 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.655998 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656002 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part1', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part14', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part15', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part16', 'scsi-SQEMU_QEMU_HARDDISK_29411df8-7097-419c-8410-7d3b9e1926ff-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-23 00:58:05.656019 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1bf36823--02d4--5086--a00f--5e3efdd328af-osd--block--1bf36823--02d4--5086--a00f--5e3efdd328af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kEE7Ck-8cxh-3YgF-isQE-C5eu-xXzy-3tTWUP', 'scsi-0QEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e', 'scsi-SQEMU_QEMU_HARDDISK_77dd2124-92bc-4f46-82be-f9b228a0677e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b7e7e409--387b--5e35--af60--96efea6ce8aa-osd--block--b7e7e409--387b--5e35--af60--96efea6ce8aa', 'dm-uuid-LVM-HrrdHKVvlffigjb21JUaHBk7nln1BlPkaHRqnZG62YT1PnrapsdzAe9Rck9gjuMK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656027 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--92a7bb1e--121d--56dc--8fa7--94c9c65422a6-osd--block--92a7bb1e--121d--56dc--8fa7--94c9c65422a6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vK0CLZ-Zkn8-NYp8-uCt5-hT6I-IUy5-Sf42U6', 'scsi-0QEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6', 'scsi-SQEMU_QEMU_HARDDISK_0331d52b-cef6-4339-b12c-c63469d626c6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656034 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6fa6fe99--be0d--55bf--a5b2--66c7db596be7-osd--block--6fa6fe99--be0d--55bf--a5b2--66c7db596be7', 'dm-uuid-LVM-1HDuY7LP7KT9iCr7bqCcrJ45J4jOmY5I09TE9ct2aroQWcsilZzsrpqQJmwrazJB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656042 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5', 'scsi-SQEMU_QEMU_HARDDISK_56c11f4c-1dc5-4b86-8fcf-019d2bf6f6e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656046 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656050 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656054 | orchestrator | skipping: 
[testbed-node-4] 2026-03-23 00:58:05.656058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656062 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656070 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656078 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656086 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656090 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656099 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part1', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part14', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part15', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part16', 'scsi-SQEMU_QEMU_HARDDISK_53d97a78-52aa-4b6a-8314-cc73eaae2f37-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656107 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b7e7e409--387b--5e35--af60--96efea6ce8aa-osd--block--b7e7e409--387b--5e35--af60--96efea6ce8aa'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jnm62G-v7Cy-4iJo-dTjS-LtgQ-XTBq-aem6vq', 'scsi-0QEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d', 'scsi-SQEMU_QEMU_HARDDISK_59b4a83f-d9c4-4d19-8941-518108c7531d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656111 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6fa6fe99--be0d--55bf--a5b2--66c7db596be7-osd--block--6fa6fe99--be0d--55bf--a5b2--66c7db596be7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oMXOqN-2up1-mMzD-oyo3-glzr-0BZQ-HD6hJ7', 'scsi-0QEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76', 'scsi-SQEMU_QEMU_HARDDISK_ff498ee2-e745-4049-bce7-87b4610f4b76'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656115 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d', 'scsi-SQEMU_QEMU_HARDDISK_a6dc9e4a-bb14-4275-87ca-e10d4388766d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-23-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-23 00:58:05.656129 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.656133 | orchestrator | 2026-03-23 00:58:05.656137 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-23 00:58:05.656141 | orchestrator | Monday 23 March 2026 00:56:29 +0000 (0:00:00.591) 0:00:18.016 ********** 2026-03-23 00:58:05.656146 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.656152 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.656158 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.656164 | orchestrator | 2026-03-23 00:58:05.656170 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-03-23 00:58:05.656179 | orchestrator | Monday 23 March 2026 00:56:30 +0000 (0:00:00.749) 0:00:18.765 ********** 2026-03-23 00:58:05.656185 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.656191 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.656197 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.656203 | orchestrator | 2026-03-23 00:58:05.656208 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-23 00:58:05.656215 | orchestrator | Monday 23 March 2026 00:56:30 +0000 (0:00:00.460) 0:00:19.226 ********** 2026-03-23 00:58:05.656221 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.656227 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.656233 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.656239 | orchestrator | 2026-03-23 00:58:05.656245 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-23 00:58:05.656251 | orchestrator | Monday 23 March 2026 00:56:31 +0000 (0:00:00.701) 0:00:19.927 ********** 2026-03-23 00:58:05.656257 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656263 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.656269 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.656276 | orchestrator | 2026-03-23 00:58:05.656281 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-23 00:58:05.656287 | orchestrator | Monday 23 March 2026 00:56:31 +0000 (0:00:00.279) 0:00:20.207 ********** 2026-03-23 00:58:05.656293 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656299 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.656305 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.656311 | orchestrator | 2026-03-23 00:58:05.656317 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-03-23 00:58:05.656323 | orchestrator | Monday 23 March 2026 00:56:32 +0000 (0:00:00.378) 0:00:20.585 ********** 2026-03-23 00:58:05.656329 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656335 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.656341 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.656347 | orchestrator | 2026-03-23 00:58:05.656352 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-23 00:58:05.656358 | orchestrator | Monday 23 March 2026 00:56:32 +0000 (0:00:00.493) 0:00:21.079 ********** 2026-03-23 00:58:05.656364 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-23 00:58:05.656370 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-23 00:58:05.656377 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-23 00:58:05.656384 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-23 00:58:05.656390 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-23 00:58:05.656402 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-23 00:58:05.656409 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-23 00:58:05.656415 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-23 00:58:05.656422 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-23 00:58:05.656429 | orchestrator | 2026-03-23 00:58:05.656437 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-23 00:58:05.656444 | orchestrator | Monday 23 March 2026 00:56:33 +0000 (0:00:00.852) 0:00:21.932 ********** 2026-03-23 00:58:05.656451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-23 00:58:05.656458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-23 00:58:05.656464 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-03-23 00:58:05.656471 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656478 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-23 00:58:05.656484 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-23 00:58:05.656491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-23 00:58:05.656498 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.656505 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-23 00:58:05.656512 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-23 00:58:05.656519 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-23 00:58:05.656526 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.656533 | orchestrator | 2026-03-23 00:58:05.656539 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-23 00:58:05.656545 | orchestrator | Monday 23 March 2026 00:56:33 +0000 (0:00:00.329) 0:00:22.262 ********** 2026-03-23 00:58:05.656552 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 00:58:05.656558 | orchestrator | 2026-03-23 00:58:05.656564 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-23 00:58:05.656571 | orchestrator | Monday 23 March 2026 00:56:34 +0000 (0:00:00.666) 0:00:22.928 ********** 2026-03-23 00:58:05.656584 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656591 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.656597 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.656604 | orchestrator | 2026-03-23 00:58:05.656610 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-03-23 00:58:05.656616 | orchestrator | Monday 23 March 2026 00:56:34 +0000 (0:00:00.301) 0:00:23.230 ********** 2026-03-23 00:58:05.656622 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656628 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.656635 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.656640 | orchestrator | 2026-03-23 00:58:05.656647 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-23 00:58:05.656653 | orchestrator | Monday 23 March 2026 00:56:35 +0000 (0:00:00.313) 0:00:23.543 ********** 2026-03-23 00:58:05.656673 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656679 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.656685 | orchestrator | skipping: [testbed-node-5] 2026-03-23 00:58:05.656691 | orchestrator | 2026-03-23 00:58:05.656697 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-23 00:58:05.656704 | orchestrator | Monday 23 March 2026 00:56:35 +0000 (0:00:00.309) 0:00:23.853 ********** 2026-03-23 00:58:05.656785 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.656792 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.656798 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.656804 | orchestrator | 2026-03-23 00:58:05.656810 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-23 00:58:05.656816 | orchestrator | Monday 23 March 2026 00:56:35 +0000 (0:00:00.589) 0:00:24.442 ********** 2026-03-23 00:58:05.656828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-23 00:58:05.656835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-23 00:58:05.656841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-23 00:58:05.656847 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656853 | 
orchestrator | 2026-03-23 00:58:05.656860 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-23 00:58:05.656866 | orchestrator | Monday 23 March 2026 00:56:36 +0000 (0:00:00.356) 0:00:24.799 ********** 2026-03-23 00:58:05.656872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-23 00:58:05.656879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-23 00:58:05.656883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-23 00:58:05.656887 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656891 | orchestrator | 2026-03-23 00:58:05.656895 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-23 00:58:05.656898 | orchestrator | Monday 23 March 2026 00:56:36 +0000 (0:00:00.357) 0:00:25.156 ********** 2026-03-23 00:58:05.656902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-23 00:58:05.656934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-23 00:58:05.656939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-23 00:58:05.656942 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.656946 | orchestrator | 2026-03-23 00:58:05.656950 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-23 00:58:05.656954 | orchestrator | Monday 23 March 2026 00:56:36 +0000 (0:00:00.352) 0:00:25.508 ********** 2026-03-23 00:58:05.656957 | orchestrator | ok: [testbed-node-3] 2026-03-23 00:58:05.656961 | orchestrator | ok: [testbed-node-4] 2026-03-23 00:58:05.656965 | orchestrator | ok: [testbed-node-5] 2026-03-23 00:58:05.656969 | orchestrator | 2026-03-23 00:58:05.656973 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-23 00:58:05.656976 | orchestrator | Monday 23 March 2026 00:56:37 +0000 
(0:00:00.298) 0:00:25.807 ********** 2026-03-23 00:58:05.656980 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-23 00:58:05.656984 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-23 00:58:05.656987 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-23 00:58:05.656991 | orchestrator | 2026-03-23 00:58:05.656995 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-23 00:58:05.656999 | orchestrator | Monday 23 March 2026 00:56:37 +0000 (0:00:00.471) 0:00:26.279 ********** 2026-03-23 00:58:05.657002 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-23 00:58:05.657006 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-23 00:58:05.657010 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-23 00:58:05.657014 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-23 00:58:05.657018 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-23 00:58:05.657021 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-23 00:58:05.657025 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-23 00:58:05.657029 | orchestrator | 2026-03-23 00:58:05.657033 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-23 00:58:05.657036 | orchestrator | Monday 23 March 2026 00:56:38 +0000 (0:00:00.975) 0:00:27.254 ********** 2026-03-23 00:58:05.657040 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-23 00:58:05.657044 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-23 00:58:05.657048 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-23 00:58:05.657055 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-23 00:58:05.657058 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-23 00:58:05.657062 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-23 00:58:05.657071 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-23 00:58:05.657074 | orchestrator | 2026-03-23 00:58:05.657078 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-23 00:58:05.657082 | orchestrator | Monday 23 March 2026 00:56:40 +0000 (0:00:01.908) 0:00:29.162 ********** 2026-03-23 00:58:05.657086 | orchestrator | skipping: [testbed-node-3] 2026-03-23 00:58:05.657089 | orchestrator | skipping: [testbed-node-4] 2026-03-23 00:58:05.657093 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-23 00:58:05.657097 | orchestrator | 2026-03-23 00:58:05.657101 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-23 00:58:05.657104 | orchestrator | Monday 23 March 2026 00:56:40 +0000 (0:00:00.362) 0:00:29.525 ********** 2026-03-23 00:58:05.657112 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-23 00:58:05.657117 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-23 00:58:05.657121 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-23 00:58:05.657124 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-23 00:58:05.657128 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-23 00:58:05.657132 | orchestrator | 2026-03-23 00:58:05.657136 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-23 00:58:05.657140 | orchestrator | Monday 23 March 2026 00:57:16 +0000 (0:00:35.887) 0:01:05.413 ********** 2026-03-23 00:58:05.657144 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657148 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657151 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657155 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657159 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657162 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 
00:58:05.657166 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-23 00:58:05.657170 | orchestrator | 2026-03-23 00:58:05.657174 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-23 00:58:05.657177 | orchestrator | Monday 23 March 2026 00:57:35 +0000 (0:00:18.525) 0:01:23.938 ********** 2026-03-23 00:58:05.657184 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657187 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657191 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657195 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657199 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657202 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657206 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-23 00:58:05.657210 | orchestrator | 2026-03-23 00:58:05.657213 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-23 00:58:05.657217 | orchestrator | Monday 23 March 2026 00:57:45 +0000 (0:00:09.710) 0:01:33.648 ********** 2026-03-23 00:58:05.657221 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657225 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-23 00:58:05.657228 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-23 00:58:05.657232 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657236 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-03-23 00:58:05.657242 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-23 00:58:05.657246 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657249 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-23 00:58:05.657253 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-23 00:58:05.657257 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657261 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-23 00:58:05.657265 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-23 00:58:05.657268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657272 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-23 00:58:05.657278 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-23 00:58:05.657281 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-23 00:58:05.657285 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-23 00:58:05.657289 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-23 00:58:05.657293 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-23 00:58:05.657296 | orchestrator | 2026-03-23 00:58:05.657300 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:58:05.657304 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-23 00:58:05.657308 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-23 00:58:05.657312 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-23 00:58:05.657316 | orchestrator | 2026-03-23 00:58:05.657320 | orchestrator | 2026-03-23 00:58:05.657324 | orchestrator | 2026-03-23 00:58:05.657327 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:58:05.657334 | orchestrator | Monday 23 March 2026 00:58:03 +0000 (0:00:18.548) 0:01:52.196 ********** 2026-03-23 00:58:05.657338 | orchestrator | =============================================================================== 2026-03-23 00:58:05.657341 | orchestrator | create openstack pool(s) ----------------------------------------------- 35.89s 2026-03-23 00:58:05.657345 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.55s 2026-03-23 00:58:05.657349 | orchestrator | generate keys ---------------------------------------------------------- 18.53s 2026-03-23 00:58:05.657352 | orchestrator | get keys from monitors -------------------------------------------------- 9.71s 2026-03-23 00:58:05.657362 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.91s 2026-03-23 00:58:05.657366 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.91s 2026-03-23 00:58:05.657374 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.13s 2026-03-23 00:58:05.657378 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.98s 2026-03-23 00:58:05.657381 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.88s 2026-03-23 00:58:05.657385 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2026-03-23 
00:58:05.657389 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.76s 2026-03-23 00:58:05.657393 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.75s 2026-03-23 00:58:05.657396 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.74s 2026-03-23 00:58:05.657400 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.70s 2026-03-23 00:58:05.657404 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s 2026-03-23 00:58:05.657407 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.61s 2026-03-23 00:58:05.657411 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s 2026-03-23 00:58:05.657415 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2026-03-23 00:58:05.657419 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s 2026-03-23 00:58:05.657422 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.57s 2026-03-23 00:58:05.657427 | orchestrator | 2026-03-23 00:58:05 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:05.657431 | orchestrator | 2026-03-23 00:58:05 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:05.657436 | orchestrator | 2026-03-23 00:58:05 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:05.657440 | orchestrator | 2026-03-23 00:58:05 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:08.699975 | orchestrator | 2026-03-23 00:58:08 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:08.702766 | orchestrator | 2026-03-23 00:58:08 | INFO  | Task 
7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:08.703920 | orchestrator | 2026-03-23 00:58:08 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:08.704086 | orchestrator | 2026-03-23 00:58:08 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:11.748459 | orchestrator | 2026-03-23 00:58:11 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:11.750089 | orchestrator | 2026-03-23 00:58:11 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:11.751715 | orchestrator | 2026-03-23 00:58:11 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:11.751787 | orchestrator | 2026-03-23 00:58:11 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:14.789204 | orchestrator | 2026-03-23 00:58:14 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:14.791494 | orchestrator | 2026-03-23 00:58:14 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:14.793685 | orchestrator | 2026-03-23 00:58:14 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:14.793742 | orchestrator | 2026-03-23 00:58:14 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:17.844414 | orchestrator | 2026-03-23 00:58:17 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:17.846872 | orchestrator | 2026-03-23 00:58:17 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:17.849481 | orchestrator | 2026-03-23 00:58:17 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:17.849529 | orchestrator | 2026-03-23 00:58:17 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:20.902962 | orchestrator | 2026-03-23 00:58:20 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state 
STARTED 2026-03-23 00:58:20.905027 | orchestrator | 2026-03-23 00:58:20 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:20.907726 | orchestrator | 2026-03-23 00:58:20 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:20.908022 | orchestrator | 2026-03-23 00:58:20 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:23.950091 | orchestrator | 2026-03-23 00:58:23 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:23.952220 | orchestrator | 2026-03-23 00:58:23 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:23.953168 | orchestrator | 2026-03-23 00:58:23 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:23.953206 | orchestrator | 2026-03-23 00:58:23 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:26.996057 | orchestrator | 2026-03-23 00:58:26 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:26.998008 | orchestrator | 2026-03-23 00:58:26 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:27.000392 | orchestrator | 2026-03-23 00:58:26 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:27.000737 | orchestrator | 2026-03-23 00:58:26 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:30.052231 | orchestrator | 2026-03-23 00:58:30 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:30.056686 | orchestrator | 2026-03-23 00:58:30 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:30.057189 | orchestrator | 2026-03-23 00:58:30 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:30.057238 | orchestrator | 2026-03-23 00:58:30 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:33.101358 | orchestrator | 
2026-03-23 00:58:33 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:33.103480 | orchestrator | 2026-03-23 00:58:33 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:33.105278 | orchestrator | 2026-03-23 00:58:33 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:33.105304 | orchestrator | 2026-03-23 00:58:33 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:36.155734 | orchestrator | 2026-03-23 00:58:36 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:36.158412 | orchestrator | 2026-03-23 00:58:36 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:36.162825 | orchestrator | 2026-03-23 00:58:36 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:36.162907 | orchestrator | 2026-03-23 00:58:36 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:39.216188 | orchestrator | 2026-03-23 00:58:39 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:39.217090 | orchestrator | 2026-03-23 00:58:39 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state STARTED 2026-03-23 00:58:39.221622 | orchestrator | 2026-03-23 00:58:39 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state STARTED 2026-03-23 00:58:39.221715 | orchestrator | 2026-03-23 00:58:39 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:58:42.263428 | orchestrator | 2026-03-23 00:58:42 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:58:42.266819 | orchestrator | 2026-03-23 00:58:42 | INFO  | Task 7fc6a36a-5914-4783-83c6-d31bfe3f0bf1 is in state SUCCESS 2026-03-23 00:58:42.268372 | orchestrator | 2026-03-23 00:58:42.268418 | orchestrator | 2026-03-23 00:58:42.268427 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-03-23 00:58:42.268435 | orchestrator | 2026-03-23 00:58:42.268441 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 00:58:42.268448 | orchestrator | Monday 23 March 2026 00:57:09 +0000 (0:00:00.334) 0:00:00.334 ********** 2026-03-23 00:58:42.268454 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.268462 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.268468 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.268475 | orchestrator | 2026-03-23 00:58:42.268480 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 00:58:42.268484 | orchestrator | Monday 23 March 2026 00:57:09 +0000 (0:00:00.271) 0:00:00.605 ********** 2026-03-23 00:58:42.268488 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-23 00:58:42.268493 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-23 00:58:42.268497 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-23 00:58:42.268501 | orchestrator | 2026-03-23 00:58:42.268505 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-23 00:58:42.268508 | orchestrator | 2026-03-23 00:58:42.268512 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-23 00:58:42.268516 | orchestrator | Monday 23 March 2026 00:57:09 +0000 (0:00:00.280) 0:00:00.886 ********** 2026-03-23 00:58:42.268520 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:58:42.268525 | orchestrator | 2026-03-23 00:58:42.268528 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-23 00:58:42.268532 | orchestrator | Monday 23 March 2026 00:57:10 +0000 (0:00:00.498) 0:00:01.385 ********** 2026-03-23 
00:58:42.268541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:58:42.268588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:58:42.268594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:58:42.268603 | orchestrator | 2026-03-23 00:58:42.268607 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-23 00:58:42.268610 | orchestrator | Monday 23 March 2026 00:57:12 +0000 (0:00:01.979) 0:00:03.365 ********** 2026-03-23 00:58:42.268614 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.268618 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.268624 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.268628 | orchestrator | 2026-03-23 00:58:42.268632 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-23 00:58:42.268636 | orchestrator | Monday 23 March 2026 00:57:12 +0000 (0:00:00.289) 0:00:03.654 ********** 2026-03-23 00:58:42.268639 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-23 00:58:42.268646 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-23 00:58:42.268651 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-23 00:58:42.268654 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-23 00:58:42.268658 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-23 00:58:42.268662 | 
orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-23 00:58:42.268666 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-23 00:58:42.268670 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-23 00:58:42.268673 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-23 00:58:42.268677 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-23 00:58:42.268681 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-23 00:58:42.268684 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-23 00:58:42.268688 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-23 00:58:42.268692 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-23 00:58:42.268695 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-23 00:58:42.268699 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-23 00:58:42.268706 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-23 00:58:42.268710 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-23 00:58:42.268714 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-23 00:58:42.268717 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-23 00:58:42.268721 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-23 00:58:42.268725 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': 
False})  2026-03-23 00:58:42.268728 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-23 00:58:42.268732 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-23 00:58:42.268737 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-23 00:58:42.268743 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-23 00:58:42.268747 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-23 00:58:42.268750 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-23 00:58:42.268754 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-23 00:58:42.268758 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-23 00:58:42.268762 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-23 00:58:42.268765 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-23 00:58:42.268769 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 
'nova', 'enabled': True}) 2026-03-23 00:58:42.268774 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-23 00:58:42.268778 | orchestrator | 2026-03-23 00:58:42.268782 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.268786 | orchestrator | Monday 23 March 2026 00:57:13 +0000 (0:00:00.657) 0:00:04.311 ********** 2026-03-23 00:58:42.268790 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.268796 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.268800 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.268804 | orchestrator | 2026-03-23 00:58:42.268808 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.268811 | orchestrator | Monday 23 March 2026 00:57:13 +0000 (0:00:00.417) 0:00:04.729 ********** 2026-03-23 00:58:42.268815 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.268819 | orchestrator | 2026-03-23 00:58:42.268825 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.268829 | orchestrator | Monday 23 March 2026 00:57:13 +0000 (0:00:00.117) 0:00:04.846 ********** 2026-03-23 00:58:42.268833 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.268837 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.268844 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.268848 | orchestrator | 2026-03-23 00:58:42.268909 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.268913 | orchestrator | Monday 23 March 2026 00:57:14 +0000 (0:00:00.256) 0:00:05.102 ********** 2026-03-23 00:58:42.268917 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.268921 | orchestrator | ok: [testbed-node-1] 2026-03-23 
00:58:42.268924 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.268928 | orchestrator | 2026-03-23 00:58:42.268932 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.268935 | orchestrator | Monday 23 March 2026 00:57:14 +0000 (0:00:00.285) 0:00:05.388 ********** 2026-03-23 00:58:42.268939 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.268943 | orchestrator | 2026-03-23 00:58:42.268947 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.268951 | orchestrator | Monday 23 March 2026 00:57:14 +0000 (0:00:00.135) 0:00:05.523 ********** 2026-03-23 00:58:42.268956 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.268960 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.268964 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.268968 | orchestrator | 2026-03-23 00:58:42.268972 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.268979 | orchestrator | Monday 23 March 2026 00:57:14 +0000 (0:00:00.439) 0:00:05.963 ********** 2026-03-23 00:58:42.268985 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.268995 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.269004 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.269009 | orchestrator | 2026-03-23 00:58:42.269015 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.269021 | orchestrator | Monday 23 March 2026 00:57:15 +0000 (0:00:00.338) 0:00:06.301 ********** 2026-03-23 00:58:42.269027 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269033 | orchestrator | 2026-03-23 00:58:42.269039 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.269046 | orchestrator | Monday 23 March 2026 00:57:15 +0000 
(0:00:00.127) 0:00:06.429 ********** 2026-03-23 00:58:42.269052 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269058 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269064 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269069 | orchestrator | 2026-03-23 00:58:42.269075 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.269082 | orchestrator | Monday 23 March 2026 00:57:15 +0000 (0:00:00.270) 0:00:06.700 ********** 2026-03-23 00:58:42.269088 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.269095 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.269100 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.269107 | orchestrator | 2026-03-23 00:58:42.269113 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.269119 | orchestrator | Monday 23 March 2026 00:57:15 +0000 (0:00:00.293) 0:00:06.993 ********** 2026-03-23 00:58:42.269126 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269133 | orchestrator | 2026-03-23 00:58:42.269140 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.269146 | orchestrator | Monday 23 March 2026 00:57:16 +0000 (0:00:00.121) 0:00:07.115 ********** 2026-03-23 00:58:42.269153 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269160 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269167 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269173 | orchestrator | 2026-03-23 00:58:42.269181 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.269185 | orchestrator | Monday 23 March 2026 00:57:16 +0000 (0:00:00.418) 0:00:07.533 ********** 2026-03-23 00:58:42.269190 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.269194 | orchestrator | ok: 
[testbed-node-1] 2026-03-23 00:58:42.269198 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.269207 | orchestrator | 2026-03-23 00:58:42.269212 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.269216 | orchestrator | Monday 23 March 2026 00:57:16 +0000 (0:00:00.278) 0:00:07.812 ********** 2026-03-23 00:58:42.269220 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269224 | orchestrator | 2026-03-23 00:58:42.269229 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.269233 | orchestrator | Monday 23 March 2026 00:57:16 +0000 (0:00:00.125) 0:00:07.938 ********** 2026-03-23 00:58:42.269237 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269242 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269246 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269251 | orchestrator | 2026-03-23 00:58:42.269257 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.269263 | orchestrator | Monday 23 March 2026 00:57:17 +0000 (0:00:00.275) 0:00:08.213 ********** 2026-03-23 00:58:42.269269 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.269275 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.269281 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.269288 | orchestrator | 2026-03-23 00:58:42.269295 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.269301 | orchestrator | Monday 23 March 2026 00:57:17 +0000 (0:00:00.293) 0:00:08.507 ********** 2026-03-23 00:58:42.269308 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269314 | orchestrator | 2026-03-23 00:58:42.269321 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.269328 | orchestrator | Monday 23 
March 2026 00:57:17 +0000 (0:00:00.299) 0:00:08.806 ********** 2026-03-23 00:58:42.269334 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269341 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269347 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269353 | orchestrator | 2026-03-23 00:58:42.269359 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.269373 | orchestrator | Monday 23 March 2026 00:57:18 +0000 (0:00:00.306) 0:00:09.113 ********** 2026-03-23 00:58:42.269383 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.269388 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.269394 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.269399 | orchestrator | 2026-03-23 00:58:42.269405 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.269411 | orchestrator | Monday 23 March 2026 00:57:18 +0000 (0:00:00.288) 0:00:09.401 ********** 2026-03-23 00:58:42.269416 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269422 | orchestrator | 2026-03-23 00:58:42.269428 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.269433 | orchestrator | Monday 23 March 2026 00:57:18 +0000 (0:00:00.116) 0:00:09.518 ********** 2026-03-23 00:58:42.269439 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269445 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269450 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269456 | orchestrator | 2026-03-23 00:58:42.269462 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.269469 | orchestrator | Monday 23 March 2026 00:57:18 +0000 (0:00:00.290) 0:00:09.808 ********** 2026-03-23 00:58:42.269475 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.269481 | 
orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.269486 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.269492 | orchestrator | 2026-03-23 00:58:42.269498 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.269504 | orchestrator | Monday 23 March 2026 00:57:19 +0000 (0:00:00.504) 0:00:10.313 ********** 2026-03-23 00:58:42.269510 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269516 | orchestrator | 2026-03-23 00:58:42.269522 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.269528 | orchestrator | Monday 23 March 2026 00:57:19 +0000 (0:00:00.118) 0:00:10.431 ********** 2026-03-23 00:58:42.269541 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269547 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269553 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269559 | orchestrator | 2026-03-23 00:58:42.269565 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.269571 | orchestrator | Monday 23 March 2026 00:57:19 +0000 (0:00:00.264) 0:00:10.695 ********** 2026-03-23 00:58:42.269576 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.269582 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.269589 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.269595 | orchestrator | 2026-03-23 00:58:42.269601 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.269608 | orchestrator | Monday 23 March 2026 00:57:19 +0000 (0:00:00.301) 0:00:10.996 ********** 2026-03-23 00:58:42.269614 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269620 | orchestrator | 2026-03-23 00:58:42.269627 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.269633 | 
orchestrator | Monday 23 March 2026 00:57:20 +0000 (0:00:00.141) 0:00:11.138 ********** 2026-03-23 00:58:42.269639 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269643 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269647 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269650 | orchestrator | 2026-03-23 00:58:42.269654 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-23 00:58:42.269658 | orchestrator | Monday 23 March 2026 00:57:20 +0000 (0:00:00.343) 0:00:11.482 ********** 2026-03-23 00:58:42.269662 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:58:42.269666 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:58:42.269670 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:58:42.269673 | orchestrator | 2026-03-23 00:58:42.269677 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-23 00:58:42.269681 | orchestrator | Monday 23 March 2026 00:57:20 +0000 (0:00:00.499) 0:00:11.981 ********** 2026-03-23 00:58:42.269685 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269688 | orchestrator | 2026-03-23 00:58:42.269692 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-23 00:58:42.269696 | orchestrator | Monday 23 March 2026 00:57:21 +0000 (0:00:00.151) 0:00:12.132 ********** 2026-03-23 00:58:42.269700 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269703 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269707 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269711 | orchestrator | 2026-03-23 00:58:42.269714 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-23 00:58:42.269718 | orchestrator | Monday 23 March 2026 00:57:21 +0000 (0:00:00.290) 0:00:12.422 ********** 2026-03-23 00:58:42.269722 | orchestrator | changed: [testbed-node-1] 
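The policy-include loop earlier in this play skips items whose `enabled` flag is `False` or `'no'` and includes those flagged `True` or `'yes'`, so the Kolla service flags evidently mix native booleans with yes/no strings. A minimal Python sketch of that normalization, assuming a hypothetical `to_bool` helper (this is an illustration, not Ansible's actual `bool` filter):

```python
# Hypothetical helper mirroring the include/skip behaviour visible in the log,
# where 'enabled' appears both as booleans (True/False) and strings ('yes'/'no').
TRUTHY = {"yes", "on", "true", "1", True}
FALSY = {"no", "off", "false", "0", False}


def to_bool(value):
    """Normalize a mixed-type enabled flag to a plain bool."""
    if isinstance(value, str):
        value = value.strip().lower()
    if value in TRUTHY:
        return True
    if value in FALSY:
        return False
    raise ValueError(f"not a boolean-like value: {value!r}")


# Sample items shaped like the loop variables in the log above.
services = [
    {"name": "heat", "enabled": "no"},
    {"name": "cinder", "enabled": "yes"},
    {"name": "designate", "enabled": True},
    {"name": "ironic", "enabled": False},
]
included = [s["name"] for s in services if to_bool(s["enabled"])]
```

With the sample items, `included` is `["cinder", "designate"]`, matching the pattern in the log where yes/True services get `policy_item.yml` included and no/False services are skipped.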
2026-03-23 00:58:42.269726 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:58:42.269729 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:58:42.269733 | orchestrator | 2026-03-23 00:58:42.269737 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-23 00:58:42.269740 | orchestrator | Monday 23 March 2026 00:57:23 +0000 (0:00:01.776) 0:00:14.199 ********** 2026-03-23 00:58:42.269744 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-23 00:58:42.269749 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-23 00:58:42.269752 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-23 00:58:42.269756 | orchestrator | 2026-03-23 00:58:42.269796 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-23 00:58:42.269801 | orchestrator | Monday 23 March 2026 00:57:25 +0000 (0:00:02.352) 0:00:16.551 ********** 2026-03-23 00:58:42.269805 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-23 00:58:42.269816 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-23 00:58:42.269820 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-23 00:58:42.269824 | orchestrator | 2026-03-23 00:58:42.269828 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-23 00:58:42.269836 | orchestrator | Monday 23 March 2026 00:57:27 +0000 (0:00:02.033) 0:00:18.584 ********** 2026-03-23 00:58:42.269840 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-23 00:58:42.269844 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-23 00:58:42.269848 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-23 00:58:42.269876 | orchestrator | 2026-03-23 00:58:42.269880 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-23 00:58:42.269884 | orchestrator | Monday 23 March 2026 00:57:28 +0000 (0:00:01.432) 0:00:20.017 ********** 2026-03-23 00:58:42.269888 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269891 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269895 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269899 | orchestrator | 2026-03-23 00:58:42.269902 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-23 00:58:42.269906 | orchestrator | Monday 23 March 2026 00:57:29 +0000 (0:00:00.249) 0:00:20.266 ********** 2026-03-23 00:58:42.269910 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.269914 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.269917 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.269921 | orchestrator | 2026-03-23 00:58:42.269925 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-23 00:58:42.269928 | orchestrator | Monday 23 March 2026 00:57:29 +0000 (0:00:00.254) 0:00:20.521 ********** 2026-03-23 00:58:42.269932 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:58:42.269936 | orchestrator | 2026-03-23 00:58:42.269940 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-23 00:58:42.269943 | orchestrator | Monday 23 March 2026 00:57:30 +0000 (0:00:00.661) 0:00:21.183 ********** 2026-03-23 00:58:42.269949 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:58:42.269966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:58:42.269971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:58:42.269978 | orchestrator | 2026-03-23 00:58:42.269985 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-23 00:58:42.269989 | orchestrator | Monday 23 March 2026 00:57:31 +0000 (0:00:01.656) 0:00:22.840 ********** 2026-03-23 00:58:42.269997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-23 00:58:42.270002 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.270063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-23 00:58:42.270073 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.270077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-23 00:58:42.270081 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.270085 | orchestrator | 2026-03-23 00:58:42.270089 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-23 00:58:42.270093 | orchestrator | Monday 23 March 2026 00:57:32 +0000 (0:00:00.769) 0:00:23.610 ********** 2026-03-23 00:58:42.270104 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-23 00:58:42.270116 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:58:42.270120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-23 00:58:42.270128 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:58:42.270139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-23 00:58:42.270144 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:58:42.270148 | orchestrator | 2026-03-23 00:58:42.270151 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-23 00:58:42.270155 | orchestrator | Monday 23 March 2026 00:57:33 +0000 (0:00:01.015) 0:00:24.625 ********** 2026-03-23 00:58:42.270159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:58:42.270174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:58:42.270179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-23 00:58:42.270188 | orchestrator | 2026-03-23 00:58:42.270192 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-23 00:58:42.270196 | orchestrator | Monday 23 March 2026 00:57:34 +0000 (0:00:01.364) 0:00:25.990 ********** 2026-03-23 00:58:42.270199 | orchestrator | skipping: [testbed-node-0] 2026-03-23 
00:58:42.270203 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:58:42.270207 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:58:42.270211 | orchestrator | 
2026-03-23 00:58:42.270214 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-23 00:58:42.270221 | orchestrator | Monday 23 March 2026 00:57:35 +0000 (0:00:00.328) 0:00:26.318 **********
2026-03-23 00:58:42.270225 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:58:42.270228 | orchestrator | 
2026-03-23 00:58:42.270232 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-23 00:58:42.270239 | orchestrator | Monday 23 March 2026 00:57:35 +0000 (0:00:00.712) 0:00:27.031 **********
2026-03-23 00:58:42.270242 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:58:42.270246 | orchestrator | 
2026-03-23 00:58:42.270250 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-23 00:58:42.270254 | orchestrator | Monday 23 March 2026 00:57:38 +0000 (0:00:02.347) 0:00:29.378 **********
2026-03-23 00:58:42.270257 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:58:42.270261 | orchestrator | 
2026-03-23 00:58:42.270265 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-23 00:58:42.270269 | orchestrator | Monday 23 March 2026 00:57:41 +0000 (0:00:02.819) 0:00:32.198 **********
2026-03-23 00:58:42.270272 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:58:42.270276 | orchestrator | 
2026-03-23 00:58:42.270280 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-23 00:58:42.270284 | orchestrator | Monday 23 March 2026 00:57:57 +0000 (0:00:16.285) 0:00:48.483 **********
2026-03-23 00:58:42.270288 | orchestrator | 
2026-03-23 00:58:42.270291 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-23 00:58:42.270295 | orchestrator | Monday 23 March 2026 00:57:57 +0000 (0:00:00.065) 0:00:48.549 **********
2026-03-23 00:58:42.270299 | orchestrator | 
2026-03-23 00:58:42.270310 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-23 00:58:42.270314 | orchestrator | Monday 23 March 2026 00:57:57 +0000 (0:00:00.063) 0:00:48.612 **********
2026-03-23 00:58:42.270318 | orchestrator | 
2026-03-23 00:58:42.270321 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-23 00:58:42.270325 | orchestrator | Monday 23 March 2026 00:57:57 +0000 (0:00:00.065) 0:00:48.678 **********
2026-03-23 00:58:42.270329 | orchestrator | changed: [testbed-node-0]
2026-03-23 00:58:42.270332 | orchestrator | changed: [testbed-node-1]
2026-03-23 00:58:42.270346 | orchestrator | changed: [testbed-node-2]
2026-03-23 00:58:42.270350 | orchestrator | 
2026-03-23 00:58:42.270354 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:58:42.270358 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-23 00:58:42.270362 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-23 00:58:42.270366 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-23 00:58:42.270370 | orchestrator | 
2026-03-23 00:58:42.270373 | orchestrator | 
2026-03-23 00:58:42.270377 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:58:42.270381 | orchestrator | Monday 23 March 2026 00:58:39 +0000 (0:00:42.235) 0:01:30.913 **********
2026-03-23 00:58:42.270385 | orchestrator | ===============================================================================
2026-03-23 00:58:42.270389 | orchestrator | horizon : Restart horizon container ------------------------------------ 42.24s
2026-03-23 00:58:42.270392 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.29s
2026-03-23 00:58:42.270396 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.82s
2026-03-23 00:58:42.270400 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.35s
2026-03-23 00:58:42.270404 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.35s
2026-03-23 00:58:42.270407 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.03s
2026-03-23 00:58:42.270411 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.98s
2026-03-23 00:58:42.270415 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.78s
2026-03-23 00:58:42.270419 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.66s
2026-03-23 00:58:42.270422 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.43s
2026-03-23 00:58:42.270426 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.36s
2026-03-23 00:58:42.270430 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.02s
2026-03-23 00:58:42.270433 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.77s
2026-03-23 00:58:42.270437 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s
2026-03-23 00:58:42.270441 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s
2026-03-23 00:58:42.270444 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s
2026-03-23 00:58:42.270448 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2026-03-23 00:58:42.270452 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2026-03-23 00:58:42.270456 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s
2026-03-23 00:58:42.270459 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.44s
2026-03-23 00:58:42.271054 | orchestrator | 2026-03-23 00:58:42 | INFO  | Task 7dc61cf6-f90d-4406-9883-a7deeed609c6 is in state SUCCESS
2026-03-23 00:58:42.271073 | orchestrator | 2026-03-23 00:58:42 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:58:45.321949 | orchestrator | 2026-03-23 00:58:45 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED
2026-03-23 00:58:45.323255 | orchestrator | 2026-03-23 00:58:45 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED
2026-03-23 00:58:45.323283 | orchestrator | 2026-03-23 00:58:45 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:58:48.378715 | orchestrator | 2026-03-23 00:58:48 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED
2026-03-23 00:58:48.379681 | orchestrator | 2026-03-23 00:58:48 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED
2026-03-23 00:58:48.379730 | orchestrator | 2026-03-23 00:58:48 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:58:51.516948 | orchestrator | 2026-03-23 00:58:51 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED
2026-03-23 00:58:51.518507 | orchestrator | 2026-03-23 00:58:51 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED
2026-03-23 00:58:51.519507 | orchestrator | 2026-03-23 00:58:51 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:58:54.560894 | orchestrator | 2026-03-23 00:58:54 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED
2026-03-23 00:58:54.562438 | orchestrator | 2026-03-23 00:58:54 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED
2026-03-23 00:58:54.562477 | orchestrator | 2026-03-23 00:58:54 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:58:57.606125 | orchestrator | 2026-03-23 00:58:57 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED
2026-03-23 00:58:57.608096 | orchestrator | 2026-03-23 00:58:57 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED
2026-03-23 00:58:57.608430 | orchestrator | 2026-03-23 00:58:57 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:59:00.648460 | orchestrator | 2026-03-23 00:59:00 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED
2026-03-23 00:59:00.649481 | orchestrator | 2026-03-23 00:59:00 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED
2026-03-23 00:59:00.649517 | orchestrator | 2026-03-23 00:59:00 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:59:03.698069 | orchestrator | 2026-03-23 00:59:03 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED
2026-03-23 00:59:03.700115 | orchestrator | 2026-03-23 00:59:03 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED
2026-03-23 00:59:03.700172 | orchestrator | 2026-03-23 00:59:03 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:59:06.745126 | orchestrator | 2026-03-23 00:59:06 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED
2026-03-23 00:59:06.747115 | orchestrator | 2026-03-23 00:59:06 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED
2026-03-23 00:59:06.747171 | orchestrator | 2026-03-23 00:59:06 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:59:09.789969 | orchestrator | 2026-03-23 00:59:09 | INFO  | Task 
aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:09.791262 | orchestrator | 2026-03-23 00:59:09 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:09.791303 | orchestrator | 2026-03-23 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:12.831533 | orchestrator | 2026-03-23 00:59:12 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:12.835671 | orchestrator | 2026-03-23 00:59:12 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:12.835731 | orchestrator | 2026-03-23 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:15.881030 | orchestrator | 2026-03-23 00:59:15 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:15.882835 | orchestrator | 2026-03-23 00:59:15 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:15.883007 | orchestrator | 2026-03-23 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:18.925390 | orchestrator | 2026-03-23 00:59:18 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:18.927028 | orchestrator | 2026-03-23 00:59:18 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:18.927085 | orchestrator | 2026-03-23 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:21.973536 | orchestrator | 2026-03-23 00:59:21 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:21.976276 | orchestrator | 2026-03-23 00:59:21 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:21.976319 | orchestrator | 2026-03-23 00:59:21 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:25.022656 | orchestrator | 2026-03-23 00:59:25 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 
00:59:25.024282 | orchestrator | 2026-03-23 00:59:25 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:25.024342 | orchestrator | 2026-03-23 00:59:25 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:28.065748 | orchestrator | 2026-03-23 00:59:28 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:28.067171 | orchestrator | 2026-03-23 00:59:28 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:28.067248 | orchestrator | 2026-03-23 00:59:28 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:31.104561 | orchestrator | 2026-03-23 00:59:31 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:31.105748 | orchestrator | 2026-03-23 00:59:31 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:31.105816 | orchestrator | 2026-03-23 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:34.142677 | orchestrator | 2026-03-23 00:59:34 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:34.144842 | orchestrator | 2026-03-23 00:59:34 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:34.145137 | orchestrator | 2026-03-23 00:59:34 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:37.185364 | orchestrator | 2026-03-23 00:59:37 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:37.187190 | orchestrator | 2026-03-23 00:59:37 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:37.187257 | orchestrator | 2026-03-23 00:59:37 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:40.236050 | orchestrator | 2026-03-23 00:59:40 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:40.238122 | orchestrator | 2026-03-23 00:59:40 | INFO  | Task 
2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state STARTED 2026-03-23 00:59:40.238161 | orchestrator | 2026-03-23 00:59:40 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:43.288034 | orchestrator | 2026-03-23 00:59:43 | INFO  | Task c8b34124-a94c-4e33-b20d-d6766b3a2bd4 is in state STARTED 2026-03-23 00:59:43.289480 | orchestrator | 2026-03-23 00:59:43 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:43.291203 | orchestrator | 2026-03-23 00:59:43 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 00:59:43.293066 | orchestrator | 2026-03-23 00:59:43 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 00:59:43.294817 | orchestrator | 2026-03-23 00:59:43 | INFO  | Task 2d86b3f7-e57d-4143-a8bb-17c5c2b48395 is in state SUCCESS 2026-03-23 00:59:43.295101 | orchestrator | 2026-03-23 00:59:43.295117 | orchestrator | 2026-03-23 00:59:43.295121 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-23 00:59:43.295124 | orchestrator | 2026-03-23 00:59:43.295128 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-23 00:59:43.295133 | orchestrator | Monday 23 March 2026 00:58:07 +0000 (0:00:00.194) 0:00:00.194 ********** 2026-03-23 00:59:43.295138 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-23 00:59:43.295143 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295148 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295153 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-23 00:59:43.295158 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295163 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-23 00:59:43.295168 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-23 00:59:43.295173 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-23 00:59:43.295187 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-23 00:59:43.295192 | orchestrator | 2026-03-23 00:59:43.295197 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-23 00:59:43.295200 | orchestrator | Monday 23 March 2026 00:58:11 +0000 (0:00:04.674) 0:00:04.868 ********** 2026-03-23 00:59:43.295203 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-23 00:59:43.295206 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295210 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295214 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-23 00:59:43.295219 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295225 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-23 00:59:43.295230 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-23 00:59:43.295236 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-23 00:59:43.295241 | orchestrator | 
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-23 00:59:43.295246 | orchestrator | 2026-03-23 00:59:43.295251 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-23 00:59:43.295256 | orchestrator | Monday 23 March 2026 00:58:16 +0000 (0:00:04.550) 0:00:09.418 ********** 2026-03-23 00:59:43.295261 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-23 00:59:43.295267 | orchestrator | 2026-03-23 00:59:43.295271 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-23 00:59:43.295274 | orchestrator | Monday 23 March 2026 00:58:17 +0000 (0:00:00.996) 0:00:10.415 ********** 2026-03-23 00:59:43.295277 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-23 00:59:43.295280 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295284 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295295 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-23 00:59:43.295298 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295301 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-23 00:59:43.295306 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-23 00:59:43.295310 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-23 00:59:43.295316 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-23 00:59:43.295321 | orchestrator | 2026-03-23 00:59:43.295327 | orchestrator | TASK [Check if target directories exist] 
*************************************** 2026-03-23 00:59:43.295332 | orchestrator | Monday 23 March 2026 00:58:31 +0000 (0:00:13.991) 0:00:24.407 ********** 2026-03-23 00:59:43.295337 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-23 00:59:43.295343 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-23 00:59:43.295348 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-23 00:59:43.295353 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-23 00:59:43.295362 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-23 00:59:43.295365 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-23 00:59:43.295368 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-23 00:59:43.295371 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-23 00:59:43.295374 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-23 00:59:43.295377 | orchestrator | 2026-03-23 00:59:43.295380 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-23 00:59:43.295383 | orchestrator | Monday 23 March 2026 00:58:34 +0000 (0:00:03.113) 0:00:27.521 ********** 2026-03-23 00:59:43.295386 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-23 00:59:43.295389 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295392 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295396 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-23 00:59:43.295399 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-23 00:59:43.295402 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-23 00:59:43.295405 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-23 00:59:43.295410 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-23 00:59:43.295413 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-23 00:59:43.295416 | orchestrator | 2026-03-23 00:59:43.295419 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:59:43.295422 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 00:59:43.295426 | orchestrator | 2026-03-23 00:59:43.295429 | orchestrator | 2026-03-23 00:59:43.295432 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:59:43.295435 | orchestrator | Monday 23 March 2026 00:58:41 +0000 (0:00:07.040) 0:00:34.561 ********** 2026-03-23 00:59:43.295438 | orchestrator | =============================================================================== 2026-03-23 00:59:43.295445 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.99s 2026-03-23 00:59:43.295448 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.04s 2026-03-23 00:59:43.295451 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.67s 2026-03-23 00:59:43.295454 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.55s 2026-03-23 00:59:43.295457 | orchestrator | Check if target 
directories exist --------------------------------------- 3.11s 2026-03-23 00:59:43.295460 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2026-03-23 00:59:43.295463 | orchestrator | 2026-03-23 00:59:43.295466 | orchestrator | 2026-03-23 00:59:43.295469 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-23 00:59:43.295472 | orchestrator | 2026-03-23 00:59:43.295475 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-23 00:59:43.295478 | orchestrator | Monday 23 March 2026 00:58:45 +0000 (0:00:00.309) 0:00:00.309 ********** 2026-03-23 00:59:43.295481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-23 00:59:43.295485 | orchestrator | 2026-03-23 00:59:43.295488 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-23 00:59:43.295491 | orchestrator | Monday 23 March 2026 00:58:45 +0000 (0:00:00.234) 0:00:00.543 ********** 2026-03-23 00:59:43.295494 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-23 00:59:43.295497 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-23 00:59:43.295500 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-23 00:59:43.295503 | orchestrator | 2026-03-23 00:59:43.295506 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-23 00:59:43.295509 | orchestrator | Monday 23 March 2026 00:58:46 +0000 (0:00:01.577) 0:00:02.121 ********** 2026-03-23 00:59:43.295512 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-23 00:59:43.295515 | orchestrator | 2026-03-23 00:59:43.295518 | orchestrator | TASK 
[osism.services.cephclient : Copy keyring file] *************************** 2026-03-23 00:59:43.295521 | orchestrator | Monday 23 March 2026 00:58:48 +0000 (0:00:01.196) 0:00:03.317 ********** 2026-03-23 00:59:43.295524 | orchestrator | changed: [testbed-manager] 2026-03-23 00:59:43.295528 | orchestrator | 2026-03-23 00:59:43.295533 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-23 00:59:43.295537 | orchestrator | Monday 23 March 2026 00:58:48 +0000 (0:00:00.901) 0:00:04.218 ********** 2026-03-23 00:59:43.295542 | orchestrator | changed: [testbed-manager] 2026-03-23 00:59:43.295547 | orchestrator | 2026-03-23 00:59:43.295552 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-23 00:59:43.295557 | orchestrator | Monday 23 March 2026 00:58:49 +0000 (0:00:00.881) 0:00:05.100 ********** 2026-03-23 00:59:43.295561 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-03-23 00:59:43.295566 | orchestrator | ok: [testbed-manager] 2026-03-23 00:59:43.295571 | orchestrator | 2026-03-23 00:59:43.295575 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-23 00:59:43.295584 | orchestrator | Monday 23 March 2026 00:59:31 +0000 (0:00:42.040) 0:00:47.141 ********** 2026-03-23 00:59:43.295589 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-23 00:59:43.295594 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-23 00:59:43.295599 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-23 00:59:43.295604 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-23 00:59:43.295609 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-23 00:59:43.295614 | orchestrator | 2026-03-23 00:59:43.295619 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-23 00:59:43.295629 | orchestrator | Monday 23 March 2026 00:59:35 +0000 (0:00:03.623) 0:00:50.764 ********** 2026-03-23 00:59:43.295632 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-23 00:59:43.295635 | orchestrator | 2026-03-23 00:59:43.295640 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-23 00:59:43.295645 | orchestrator | Monday 23 March 2026 00:59:36 +0000 (0:00:00.496) 0:00:51.260 ********** 2026-03-23 00:59:43.295650 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:59:43.295655 | orchestrator | 2026-03-23 00:59:43.295660 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-23 00:59:43.295665 | orchestrator | Monday 23 March 2026 00:59:36 +0000 (0:00:00.125) 0:00:51.386 ********** 2026-03-23 00:59:43.295670 | orchestrator | skipping: [testbed-manager] 2026-03-23 00:59:43.295674 | orchestrator | 2026-03-23 00:59:43.295679 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-23 00:59:43.295684 | orchestrator | Monday 23 March 2026 00:59:36 +0000 (0:00:00.291) 0:00:51.678 ********** 2026-03-23 00:59:43.295697 | orchestrator | changed: [testbed-manager] 2026-03-23 00:59:43.295703 | orchestrator | 2026-03-23 00:59:43.295707 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-23 00:59:43.295713 | orchestrator | Monday 23 March 2026 00:59:37 +0000 (0:00:01.423) 0:00:53.101 ********** 2026-03-23 00:59:43.295718 | orchestrator | changed: [testbed-manager] 2026-03-23 00:59:43.295723 | orchestrator | 2026-03-23 00:59:43.295728 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-23 00:59:43.295733 | orchestrator | Monday 23 March 2026 00:59:38 +0000 (0:00:00.693) 0:00:53.795 ********** 2026-03-23 00:59:43.295739 | orchestrator | changed: [testbed-manager] 2026-03-23 00:59:43.295744 | orchestrator | 2026-03-23 00:59:43.295750 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-23 00:59:43.295755 | orchestrator | Monday 23 March 2026 00:59:39 +0000 (0:00:00.578) 0:00:54.373 ********** 2026-03-23 00:59:43.295786 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-23 00:59:43.295792 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-23 00:59:43.295798 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-23 00:59:43.295803 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-23 00:59:43.295809 | orchestrator | 2026-03-23 00:59:43.295814 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:59:43.295820 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 00:59:43.295826 | orchestrator | 2026-03-23 00:59:43.295831 | orchestrator | 2026-03-23 
00:59:43.295837 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:59:43.295842 | orchestrator | Monday 23 March 2026 00:59:40 +0000 (0:00:01.496) 0:00:55.870 ********** 2026-03-23 00:59:43.295848 | orchestrator | =============================================================================== 2026-03-23 00:59:43.295852 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.04s 2026-03-23 00:59:43.295855 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.62s 2026-03-23 00:59:43.295859 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.58s 2026-03-23 00:59:43.295863 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.50s 2026-03-23 00:59:43.295866 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.42s 2026-03-23 00:59:43.295870 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.20s 2026-03-23 00:59:43.295873 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s 2026-03-23 00:59:43.295877 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2026-03-23 00:59:43.295880 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2026-03-23 00:59:43.295884 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s 2026-03-23 00:59:43.295891 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2026-03-23 00:59:43.295895 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2026-03-23 00:59:43.295898 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2026-03-23 00:59:43.295902 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-03-23 00:59:43.296202 | orchestrator | 2026-03-23 00:59:43 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:46.323620 | orchestrator | 2026-03-23 00:59:46 | INFO  | Task c8b34124-a94c-4e33-b20d-d6766b3a2bd4 is in state SUCCESS 2026-03-23 00:59:46.324416 | orchestrator | 2026-03-23 00:59:46 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:46.324881 | orchestrator | 2026-03-23 00:59:46 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 00:59:46.325703 | orchestrator | 2026-03-23 00:59:46 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 00:59:46.325797 | orchestrator | 2026-03-23 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:49.398968 | orchestrator | 2026-03-23 00:59:49 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 00:59:49.399030 | orchestrator | 2026-03-23 00:59:49 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:49.399903 | orchestrator | 2026-03-23 00:59:49 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 00:59:49.401897 | orchestrator | 2026-03-23 00:59:49 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 00:59:49.403548 | orchestrator | 2026-03-23 00:59:49 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 00:59:49.403580 | orchestrator | 2026-03-23 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:52.434387 | orchestrator | 2026-03-23 00:59:52 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 00:59:52.436180 | orchestrator | 2026-03-23 00:59:52 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state STARTED 2026-03-23 00:59:52.436244 | orchestrator | 2026-03-23 00:59:52 | INFO  
| Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED
2026-03-23 00:59:52.436253 | orchestrator | 2026-03-23 00:59:52 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED
2026-03-23 00:59:52.436954 | orchestrator | 2026-03-23 00:59:52 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED
2026-03-23 00:59:52.436982 | orchestrator | 2026-03-23 00:59:52 | INFO  | Wait 1 second(s) until the next check
2026-03-23 00:59:55.474102 | orchestrator | 2026-03-23 00:59:55 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED
2026-03-23 00:59:55.476733 | orchestrator | 2026-03-23 00:59:55 | INFO  | Task aaeed038-d951-4be3-82c3-c7e6eb8e0080 is in state SUCCESS
2026-03-23 00:59:55.477296 | orchestrator |
2026-03-23 00:59:55.477314 | orchestrator |
2026-03-23 00:59:55.477318 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 00:59:55.477322 | orchestrator |
2026-03-23 00:59:55.477325 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-23 00:59:55.477329 | orchestrator | Monday 23 March 2026 00:59:44 +0000 (0:00:00.188) 0:00:00.188 **********
2026-03-23 00:59:55.477332 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:59:55.477336 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:59:55.477339 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:59:55.477343 | orchestrator |
2026-03-23 00:59:55.477346 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 00:59:55.477360 | orchestrator | Monday 23 March 2026 00:59:44 +0000 (0:00:00.332) 0:00:00.521 **********
2026-03-23 00:59:55.477364 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-23 00:59:55.477367 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-23 00:59:55.477371 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-23 00:59:55.477374 | orchestrator |
2026-03-23 00:59:55.477377 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-23 00:59:55.477380 | orchestrator |
2026-03-23 00:59:55.477383 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-23 00:59:55.477386 | orchestrator | Monday 23 March 2026 00:59:44 +0000 (0:00:00.509) 0:00:01.030 **********
2026-03-23 00:59:55.477389 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:59:55.477392 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:59:55.477395 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:59:55.477398 | orchestrator |
2026-03-23 00:59:55.477401 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 00:59:55.477405 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:59:55.477409 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:59:55.477412 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-23 00:59:55.477415 | orchestrator |
2026-03-23 00:59:55.477418 | orchestrator |
2026-03-23 00:59:55.477422 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 00:59:55.477425 | orchestrator | Monday 23 March 2026 00:59:46 +0000 (0:00:01.112) 0:00:02.142 **********
2026-03-23 00:59:55.477428 | orchestrator | ===============================================================================
2026-03-23 00:59:55.477431 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.11s
2026-03-23 00:59:55.477434 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s
2026-03-23 00:59:55.477437 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-03-23 00:59:55.477440 | orchestrator |
2026-03-23 00:59:55.477443 | orchestrator |
2026-03-23 00:59:55.477446 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 00:59:55.477449 | orchestrator |
2026-03-23 00:59:55.477453 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-23 00:59:55.477456 | orchestrator | Monday 23 March 2026 00:57:09 +0000 (0:00:00.312) 0:00:00.312 **********
2026-03-23 00:59:55.477459 | orchestrator | ok: [testbed-node-0]
2026-03-23 00:59:55.477462 | orchestrator | ok: [testbed-node-1]
2026-03-23 00:59:55.477465 | orchestrator | ok: [testbed-node-2]
2026-03-23 00:59:55.477468 | orchestrator |
2026-03-23 00:59:55.477471 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 00:59:55.477474 | orchestrator | Monday 23 March 2026 00:57:09 +0000 (0:00:00.283) 0:00:00.596 **********
2026-03-23 00:59:55.477477 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-23 00:59:55.477480 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-23 00:59:55.477483 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-23 00:59:55.477486 | orchestrator |
2026-03-23 00:59:55.477490 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-23 00:59:55.477493 | orchestrator |
2026-03-23 00:59:55.477496 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-23 00:59:55.477499 | orchestrator | Monday 23 March 2026 00:57:09 +0000 (0:00:00.297) 0:00:00.894 **********
2026-03-23 00:59:55.477502 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:59:55.477505 | orchestrator |
2026-03-23 00:59:55.477508 |
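The "Waiting for Keystone public port to be UP" task in the log above simply polls the Keystone public port (5000) until it accepts TCP connections, then lets the deployment proceed. A minimal Python sketch of that kind of check; the `wait_for_port` helper below is hypothetical (it is not part of OSISM or kolla-ansible, which use Ansible's `wait_for` module for this):

```python
import socket
import time


def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Poll a TCP port until it accepts connections or a deadline passes.

    Returns True as soon as a connection succeeds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection resolves the address and performs the
            # TCP handshake; success means the service is listening.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Connection refused or timed out: wait and retry.
            time.sleep(interval)
    return False
```

In the playbook this corresponds to one check per controller node, which is why the task reports `ok` for testbed-node-0 through testbed-node-2 once each node can reach the port.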
orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-23 00:59:55.477514 | orchestrator | Monday 23 March 2026 00:57:10 +0000 (0:00:00.704) 0:00:01.598 ********** 2026-03-23 00:59:55.477532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.477538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.477542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.477546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-03-23 00:59:55.477550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.477559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.477565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.477569 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-23 00:59:55.477573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-23 00:59:55.477576 | orchestrator |
2026-03-23 00:59:55.477579 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-23 00:59:55.477607 | orchestrator | Monday 23 March 2026 00:57:12 +0000 (0:00:02.225) 0:00:03.824 **********
2026-03-23 00:59:55.477611 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:59:55.477614 | orchestrator |
2026-03-23 00:59:55.477619 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-23 00:59:55.477649 | orchestrator | Monday 23 March 2026 00:57:12 +0000 (0:00:00.118) 0:00:03.942 **********
2026-03-23 00:59:55.477686 | orchestrator | skipping: [testbed-node-0]
2026-03-23 00:59:55.477692 | orchestrator | skipping: [testbed-node-1]
2026-03-23 00:59:55.477695 | orchestrator | skipping: [testbed-node-2]
2026-03-23 00:59:55.477698 | orchestrator |
2026-03-23 00:59:55.477701 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-23 00:59:55.477727 | orchestrator | Monday 23 March 2026 00:57:13 +0000 (0:00:00.288) 0:00:04.231 **********
2026-03-23 00:59:55.477730 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-23 00:59:55.477737 | orchestrator |
2026-03-23 00:59:55.477779 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-23 00:59:55.477782 | orchestrator | Monday 23 March 2026 00:57:14 +0000 (0:00:00.862) 0:00:05.094 **********
2026-03-23 00:59:55.477785 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 00:59:55.477788 | orchestrator |
2026-03-23 00:59:55.477792 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-03-23 00:59:55.477795 | orchestrator | Monday 23 March 2026 00:57:14 +0000 (0:00:00.639) 0:00:05.733 **********
2026-03-23 00:59:55.477802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.477813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.477822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.477828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478095 | orchestrator | 2026-03-23 00:59:55.478100 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-23 00:59:55.478106 | orchestrator | Monday 23 March 2026 00:57:17 +0000 (0:00:02.869) 0:00:08.602 ********** 2026-03-23 00:59:55.478112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:59:55.478123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:59:55.478136 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.478147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-03-23 00:59:55.478153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:59:55.478168 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:59:55.478173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:59:55.478181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:59:55.478194 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.478200 | orchestrator | 2026-03-23 00:59:55.478206 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-23 00:59:55.478212 | orchestrator | Monday 23 
March 2026 00:57:18 +0000 (0:00:00.547) 0:00:09.150 ********** 2026-03-23 00:59:55.478218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:59:55.478225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:59:55.478232 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.478248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:59:55.478255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:59:55.478261 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:59:55.478265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-03-23 00:59:55.478270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:59:55.478277 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.478280 | orchestrator | 2026-03-23 00:59:55.478283 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-23 00:59:55.478286 | orchestrator | Monday 23 March 2026 00:57:19 +0000 (0:00:00.961) 0:00:10.112 ********** 2026-03-23 00:59:55.478293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.478297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.478303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.478306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478332 | orchestrator | 2026-03-23 00:59:55.478335 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-23 00:59:55.478338 | orchestrator | Monday 23 March 2026 00:57:22 +0000 (0:00:03.581) 0:00:13.693 ********** 2026-03-23 00:59:55.478341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.478347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.478356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.478365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478431 | orchestrator | 2026-03-23 00:59:55.478438 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-23 00:59:55.478443 | orchestrator | Monday 23 March 2026 00:57:27 +0000 (0:00:04.861) 0:00:18.554 ********** 2026-03-23 00:59:55.478448 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:59:55.478453 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:59:55.478458 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:59:55.478463 | orchestrator | 2026-03-23 00:59:55.478468 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-23 00:59:55.478473 | orchestrator | Monday 23 March 2026 00:57:28 +0000 (0:00:01.435) 0:00:19.989 ********** 2026-03-23 00:59:55.478479 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.478482 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:59:55.478485 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.478488 | orchestrator | 2026-03-23 00:59:55.478491 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-23 00:59:55.478494 | orchestrator | Monday 23 March 2026 00:57:29 +0000 (0:00:00.775) 0:00:20.765 ********** 2026-03-23 00:59:55.478497 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.478500 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:59:55.478503 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.478506 | orchestrator | 2026-03-23 00:59:55.478509 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-23 00:59:55.478512 | orchestrator | Monday 23 March 2026 00:57:29 +0000 (0:00:00.258) 0:00:21.023 ********** 2026-03-23 00:59:55.478516 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.478519 | orchestrator | skipping: [testbed-node-1] 
2026-03-23 00:59:55.478522 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.478525 | orchestrator | 2026-03-23 00:59:55.478528 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-23 00:59:55.478531 | orchestrator | Monday 23 March 2026 00:57:30 +0000 (0:00:00.303) 0:00:21.327 ********** 2026-03-23 00:59:55.478534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:59:55.478538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:59:55.478550 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:59:55.478557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:59:55.478560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:59:55.478567 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.478570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-23 00:59:55.478575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-23 00:59:55.478583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-23 00:59:55.478587 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.478590 | orchestrator | 2026-03-23 00:59:55.478593 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-23 00:59:55.478596 | orchestrator | Monday 23 March 2026 00:57:30 +0000 (0:00:00.631) 0:00:21.959 ********** 2026-03-23 00:59:55.478599 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.478602 | orchestrator | 
skipping: [testbed-node-1] 2026-03-23 00:59:55.478605 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.478608 | orchestrator | 2026-03-23 00:59:55.478611 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-23 00:59:55.478614 | orchestrator | Monday 23 March 2026 00:57:31 +0000 (0:00:00.454) 0:00:22.414 ********** 2026-03-23 00:59:55.478618 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-23 00:59:55.478621 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-23 00:59:55.478624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-23 00:59:55.478627 | orchestrator | 2026-03-23 00:59:55.478631 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-23 00:59:55.478634 | orchestrator | Monday 23 March 2026 00:57:33 +0000 (0:00:01.663) 0:00:24.077 ********** 2026-03-23 00:59:55.478637 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-23 00:59:55.478640 | orchestrator | 2026-03-23 00:59:55.478643 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-23 00:59:55.478646 | orchestrator | Monday 23 March 2026 00:57:33 +0000 (0:00:00.901) 0:00:24.979 ********** 2026-03-23 00:59:55.478649 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.478654 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:59:55.478659 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.478664 | orchestrator | 2026-03-23 00:59:55.478668 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-23 00:59:55.478673 | orchestrator | Monday 23 March 2026 00:57:34 +0000 (0:00:00.538) 0:00:25.518 ********** 2026-03-23 00:59:55.478679 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2026-03-23 00:59:55.478686 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-23 00:59:55.478691 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-23 00:59:55.478710 | orchestrator | 2026-03-23 00:59:55.478715 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-23 00:59:55.478720 | orchestrator | Monday 23 March 2026 00:57:35 +0000 (0:00:01.096) 0:00:26.615 ********** 2026-03-23 00:59:55.478724 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:59:55.478729 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:59:55.478734 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:59:55.478751 | orchestrator | 2026-03-23 00:59:55.478756 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-23 00:59:55.478761 | orchestrator | Monday 23 March 2026 00:57:36 +0000 (0:00:00.451) 0:00:27.066 ********** 2026-03-23 00:59:55.478769 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-23 00:59:55.478774 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-23 00:59:55.478779 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-23 00:59:55.478783 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-23 00:59:55.478787 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-23 00:59:55.478792 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-23 00:59:55.478797 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-23 00:59:55.478801 | orchestrator | changed: [testbed-node-1] => (item={'src': 
'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-23 00:59:55.478806 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-23 00:59:55.478811 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-23 00:59:55.478815 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-23 00:59:55.478825 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-23 00:59:55.478830 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-23 00:59:55.478834 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-23 00:59:55.478839 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-23 00:59:55.478844 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-23 00:59:55.478849 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-23 00:59:55.478858 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-23 00:59:55.478863 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-23 00:59:55.478867 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-23 00:59:55.478871 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-23 00:59:55.478875 | orchestrator | 2026-03-23 00:59:55.478879 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-23 00:59:55.478884 | orchestrator | 
Monday 23 March 2026 00:57:45 +0000 (0:00:08.998) 0:00:36.065 ********** 2026-03-23 00:59:55.478888 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-23 00:59:55.478892 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-23 00:59:55.478896 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-23 00:59:55.478901 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-23 00:59:55.478905 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-23 00:59:55.478910 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-23 00:59:55.478914 | orchestrator | 2026-03-23 00:59:55.478919 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-23 00:59:55.478924 | orchestrator | Monday 23 March 2026 00:57:47 +0000 (0:00:02.431) 0:00:38.496 ********** 2026-03-23 00:59:55.478929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.478939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.478951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-23 00:59:55.478956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-23 00:59:55.478992 | orchestrator | 2026-03-23 00:59:55.478996 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-23 00:59:55.479000 | orchestrator | Monday 23 March 2026 00:57:49 +0000 (0:00:02.449) 0:00:40.945 ********** 2026-03-23 00:59:55.479005 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.479010 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:59:55.479015 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.479020 | orchestrator | 2026-03-23 00:59:55.479027 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-23 00:59:55.479032 | orchestrator | Monday 23 March 2026 00:57:50 +0000 (0:00:00.505) 0:00:41.451 ********** 2026-03-23 00:59:55.479037 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:59:55.479042 | orchestrator | 2026-03-23 00:59:55.479047 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-23 00:59:55.479051 | orchestrator | Monday 23 March 2026 00:57:53 +0000 (0:00:02.598) 0:00:44.050 ********** 2026-03-23 00:59:55.479056 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:59:55.479061 | orchestrator | 2026-03-23 00:59:55.479066 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-23 00:59:55.479071 | orchestrator | Monday 23 March 2026 00:57:55 +0000 (0:00:02.825) 0:00:46.876 ********** 2026-03-23 00:59:55.479080 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:59:55.479085 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:59:55.479090 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:59:55.479096 | orchestrator | 2026-03-23 00:59:55.479102 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is 
running] ***************** 2026-03-23 00:59:55.479108 | orchestrator | Monday 23 March 2026 00:57:56 +0000 (0:00:00.781) 0:00:47.657 ********** 2026-03-23 00:59:55.479113 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:59:55.479118 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:59:55.479123 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:59:55.479127 | orchestrator | 2026-03-23 00:59:55.479132 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-23 00:59:55.479137 | orchestrator | Monday 23 March 2026 00:57:56 +0000 (0:00:00.298) 0:00:47.956 ********** 2026-03-23 00:59:55.479142 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.479147 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:59:55.479151 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.479155 | orchestrator | 2026-03-23 00:59:55.479160 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-23 00:59:55.479166 | orchestrator | Monday 23 March 2026 00:57:57 +0000 (0:00:00.305) 0:00:48.262 ********** 2026-03-23 00:59:55.479172 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:59:55.479177 | orchestrator | 2026-03-23 00:59:55.479182 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-23 00:59:55.479188 | orchestrator | Monday 23 March 2026 00:58:11 +0000 (0:00:14.727) 0:01:02.989 ********** 2026-03-23 00:59:55.479193 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:59:55.479199 | orchestrator | 2026-03-23 00:59:55.479206 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-23 00:59:55.479212 | orchestrator | Monday 23 March 2026 00:58:23 +0000 (0:00:11.700) 0:01:14.690 ********** 2026-03-23 00:59:55.479217 | orchestrator | 2026-03-23 00:59:55.479223 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2026-03-23 00:59:55.479228 | orchestrator | Monday 23 March 2026 00:58:23 +0000 (0:00:00.065) 0:01:14.756 ********** 2026-03-23 00:59:55.479234 | orchestrator | 2026-03-23 00:59:55.479240 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-23 00:59:55.479246 | orchestrator | Monday 23 March 2026 00:58:23 +0000 (0:00:00.063) 0:01:14.819 ********** 2026-03-23 00:59:55.479251 | orchestrator | 2026-03-23 00:59:55.479258 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-23 00:59:55.479263 | orchestrator | Monday 23 March 2026 00:58:23 +0000 (0:00:00.063) 0:01:14.882 ********** 2026-03-23 00:59:55.479270 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:59:55.479275 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:59:55.479280 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:59:55.479287 | orchestrator | 2026-03-23 00:59:55.479293 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-23 00:59:55.479298 | orchestrator | Monday 23 March 2026 00:58:38 +0000 (0:00:14.848) 0:01:29.730 ********** 2026-03-23 00:59:55.479304 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:59:55.479310 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:59:55.479317 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:59:55.479403 | orchestrator | 2026-03-23 00:59:55.479409 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-23 00:59:55.479414 | orchestrator | Monday 23 March 2026 00:58:48 +0000 (0:00:10.138) 0:01:39.868 ********** 2026-03-23 00:59:55.479419 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:59:55.479424 | orchestrator | changed: [testbed-node-1] 2026-03-23 00:59:55.479459 | orchestrator | changed: [testbed-node-2] 2026-03-23 00:59:55.479465 | orchestrator | 2026-03-23 
00:59:55.479471 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-23 00:59:55.479477 | orchestrator | Monday 23 March 2026 00:59:00 +0000 (0:00:11.283) 0:01:51.152 ********** 2026-03-23 00:59:55.479483 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 00:59:55.479495 | orchestrator | 2026-03-23 00:59:55.479500 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-23 00:59:55.479505 | orchestrator | Monday 23 March 2026 00:59:00 +0000 (0:00:00.771) 0:01:51.923 ********** 2026-03-23 00:59:55.479510 | orchestrator | ok: [testbed-node-1] 2026-03-23 00:59:55.479515 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:59:55.479520 | orchestrator | ok: [testbed-node-2] 2026-03-23 00:59:55.479524 | orchestrator | 2026-03-23 00:59:55.479533 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-23 00:59:55.479538 | orchestrator | Monday 23 March 2026 00:59:01 +0000 (0:00:00.809) 0:01:52.732 ********** 2026-03-23 00:59:55.479543 | orchestrator | changed: [testbed-node-0] 2026-03-23 00:59:55.479548 | orchestrator | 2026-03-23 00:59:55.479553 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-23 00:59:55.479559 | orchestrator | Monday 23 March 2026 00:59:03 +0000 (0:00:01.507) 0:01:54.240 ********** 2026-03-23 00:59:55.479562 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-23 00:59:55.479565 | orchestrator | 2026-03-23 00:59:55.479568 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-23 00:59:55.479571 | orchestrator | Monday 23 March 2026 00:59:15 +0000 (0:00:12.300) 0:02:06.540 ********** 2026-03-23 00:59:55.479575 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-23 
00:59:55.479578 | orchestrator | 2026-03-23 00:59:55.479588 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-23 00:59:55.479591 | orchestrator | Monday 23 March 2026 00:59:40 +0000 (0:00:25.219) 0:02:31.760 ********** 2026-03-23 00:59:55.479594 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-23 00:59:55.479597 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-23 00:59:55.479600 | orchestrator | 2026-03-23 00:59:55.479603 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-23 00:59:55.479606 | orchestrator | Monday 23 March 2026 00:59:48 +0000 (0:00:07.383) 0:02:39.143 ********** 2026-03-23 00:59:55.479609 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.479612 | orchestrator | 2026-03-23 00:59:55.479615 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-23 00:59:55.479619 | orchestrator | Monday 23 March 2026 00:59:48 +0000 (0:00:00.300) 0:02:39.443 ********** 2026-03-23 00:59:55.479621 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.479625 | orchestrator | 2026-03-23 00:59:55.479628 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-23 00:59:55.479631 | orchestrator | Monday 23 March 2026 00:59:48 +0000 (0:00:00.151) 0:02:39.595 ********** 2026-03-23 00:59:55.479634 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.479637 | orchestrator | 2026-03-23 00:59:55.479640 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-23 00:59:55.479643 | orchestrator | Monday 23 March 2026 00:59:48 +0000 (0:00:00.350) 0:02:39.946 ********** 2026-03-23 00:59:55.479646 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.479649 
| orchestrator | 2026-03-23 00:59:55.479652 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-23 00:59:55.479655 | orchestrator | Monday 23 March 2026 00:59:49 +0000 (0:00:00.967) 0:02:40.914 ********** 2026-03-23 00:59:55.479658 | orchestrator | ok: [testbed-node-0] 2026-03-23 00:59:55.479661 | orchestrator | 2026-03-23 00:59:55.479664 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-23 00:59:55.479667 | orchestrator | Monday 23 March 2026 00:59:53 +0000 (0:00:03.320) 0:02:44.234 ********** 2026-03-23 00:59:55.479671 | orchestrator | skipping: [testbed-node-0] 2026-03-23 00:59:55.479675 | orchestrator | skipping: [testbed-node-1] 2026-03-23 00:59:55.479680 | orchestrator | skipping: [testbed-node-2] 2026-03-23 00:59:55.479691 | orchestrator | 2026-03-23 00:59:55.479697 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 00:59:55.479702 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-23 00:59:55.479707 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-23 00:59:55.479712 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-23 00:59:55.479717 | orchestrator | 2026-03-23 00:59:55.479721 | orchestrator | 2026-03-23 00:59:55.479726 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 00:59:55.479731 | orchestrator | Monday 23 March 2026 00:59:54 +0000 (0:00:01.164) 0:02:45.398 ********** 2026-03-23 00:59:55.479735 | orchestrator | =============================================================================== 2026-03-23 00:59:55.479753 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.22s 2026-03-23 
00:59:55.479759 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.85s 2026-03-23 00:59:55.479764 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.73s 2026-03-23 00:59:55.479769 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.30s 2026-03-23 00:59:55.479774 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.70s 2026-03-23 00:59:55.479779 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.28s 2026-03-23 00:59:55.479784 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.14s 2026-03-23 00:59:55.479789 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.00s 2026-03-23 00:59:55.479794 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.38s 2026-03-23 00:59:55.479800 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.86s 2026-03-23 00:59:55.479804 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.58s 2026-03-23 00:59:55.479809 | orchestrator | keystone : Creating default user role ----------------------------------- 3.32s 2026-03-23 00:59:55.479814 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.87s 2026-03-23 00:59:55.479823 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.83s 2026-03-23 00:59:55.479828 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.60s 2026-03-23 00:59:55.479834 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.45s 2026-03-23 00:59:55.479839 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.43s 2026-03-23 00:59:55.479845 
| orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.23s 2026-03-23 00:59:55.479848 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.66s 2026-03-23 00:59:55.479851 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.51s 2026-03-23 00:59:55.479858 | orchestrator | 2026-03-23 00:59:55 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 00:59:55.479861 | orchestrator | 2026-03-23 00:59:55 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 00:59:55.479864 | orchestrator | 2026-03-23 00:59:55 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 00:59:55.479868 | orchestrator | 2026-03-23 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-03-23 00:59:58.510311 | orchestrator | 2026-03-23 00:59:58 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 00:59:58.510367 | orchestrator | 2026-03-23 00:59:58 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 00:59:58.511922 | orchestrator | 2026-03-23 00:59:58 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 00:59:58.512504 | orchestrator | 2026-03-23 00:59:58 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 00:59:58.513373 | orchestrator | 2026-03-23 00:59:58 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 00:59:58.513770 | orchestrator | 2026-03-23 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:01.543659 | orchestrator | 2026-03-23 01:00:01 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 01:00:01.544833 | orchestrator | 2026-03-23 01:00:01 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:01.545257 | orchestrator | 2026-03-23 01:00:01 | INFO 
 | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:01.546067 | orchestrator | 2026-03-23 01:00:01 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:01.547935 | orchestrator | 2026-03-23 01:00:01 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:01.547975 | orchestrator | 2026-03-23 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:04.591535 | orchestrator | 2026-03-23 01:00:04 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 01:00:04.592343 | orchestrator | 2026-03-23 01:00:04 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:04.593600 | orchestrator | 2026-03-23 01:00:04 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:04.595008 | orchestrator | 2026-03-23 01:00:04 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:04.595591 | orchestrator | 2026-03-23 01:00:04 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:04.595864 | orchestrator | 2026-03-23 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:07.639642 | orchestrator | 2026-03-23 01:00:07 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 01:00:07.639699 | orchestrator | 2026-03-23 01:00:07 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:07.640494 | orchestrator | 2026-03-23 01:00:07 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:07.641399 | orchestrator | 2026-03-23 01:00:07 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:07.642313 | orchestrator | 2026-03-23 01:00:07 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:07.642345 | orchestrator | 2026-03-23 01:00:07 | INFO  | Wait 
1 second(s) until the next check 2026-03-23 01:00:10.824343 | orchestrator | 2026-03-23 01:00:10 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 01:00:10.824391 | orchestrator | 2026-03-23 01:00:10 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:10.824406 | orchestrator | 2026-03-23 01:00:10 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:10.824410 | orchestrator | 2026-03-23 01:00:10 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:10.824414 | orchestrator | 2026-03-23 01:00:10 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:10.824418 | orchestrator | 2026-03-23 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:13.704324 | orchestrator | 2026-03-23 01:00:13 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 01:00:13.704882 | orchestrator | 2026-03-23 01:00:13 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:13.705614 | orchestrator | 2026-03-23 01:00:13 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:13.706268 | orchestrator | 2026-03-23 01:00:13 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:13.707009 | orchestrator | 2026-03-23 01:00:13 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:13.707084 | orchestrator | 2026-03-23 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:16.729486 | orchestrator | 2026-03-23 01:00:16 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 01:00:16.729997 | orchestrator | 2026-03-23 01:00:16 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:16.730828 | orchestrator | 2026-03-23 01:00:16 | INFO  | Task 
70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:16.731451 | orchestrator | 2026-03-23 01:00:16 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:16.732195 | orchestrator | 2026-03-23 01:00:16 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:16.732217 | orchestrator | 2026-03-23 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:19.760242 | orchestrator | 2026-03-23 01:00:19 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state STARTED 2026-03-23 01:00:19.760294 | orchestrator | 2026-03-23 01:00:19 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:19.760299 | orchestrator | 2026-03-23 01:00:19 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:19.760303 | orchestrator | 2026-03-23 01:00:19 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:19.760307 | orchestrator | 2026-03-23 01:00:19 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:19.760311 | orchestrator | 2026-03-23 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:22.783844 | orchestrator | 2026-03-23 01:00:22 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:00:22.788106 | orchestrator | 2026-03-23 01:00:22 | INFO  | Task d06a0500-8700-41ba-8a3b-7e09a6102514 is in state SUCCESS 2026-03-23 01:00:22.791377 | orchestrator | 2026-03-23 01:00:22 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:22.793084 | orchestrator | 2026-03-23 01:00:22 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:22.795519 | orchestrator | 2026-03-23 01:00:22 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:22.797406 | orchestrator | 2026-03-23 01:00:22 | INFO  | Task 
408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:22.797430 | orchestrator | 2026-03-23 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:25.823235 | orchestrator | 2026-03-23 01:00:25 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:00:25.823341 | orchestrator | 2026-03-23 01:00:25 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:25.825263 | orchestrator | 2026-03-23 01:00:25 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:25.825993 | orchestrator | 2026-03-23 01:00:25 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:25.827565 | orchestrator | 2026-03-23 01:00:25 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:25.827612 | orchestrator | 2026-03-23 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:28.866197 | orchestrator | 2026-03-23 01:00:28 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:00:28.867421 | orchestrator | 2026-03-23 01:00:28 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:28.868801 | orchestrator | 2026-03-23 01:00:28 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:28.870099 | orchestrator | 2026-03-23 01:00:28 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:28.871046 | orchestrator | 2026-03-23 01:00:28 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:28.871078 | orchestrator | 2026-03-23 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:31.912852 | orchestrator | 2026-03-23 01:00:31 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:00:31.913609 | orchestrator | 2026-03-23 01:00:31 | INFO  | Task 
94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:31.914495 | orchestrator | 2026-03-23 01:00:31 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:31.915331 | orchestrator | 2026-03-23 01:00:31 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state STARTED 2026-03-23 01:00:31.916985 | orchestrator | 2026-03-23 01:00:31 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:31.917024 | orchestrator | 2026-03-23 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:34.944487 | orchestrator | 2026-03-23 01:00:34 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:00:34.944759 | orchestrator | 2026-03-23 01:00:34 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:34.945546 | orchestrator | 2026-03-23 01:00:34 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:34.947397 | orchestrator | 2026-03-23 01:00:34 | INFO  | Task 5c71a7f5-9f78-4c24-80f5-34b3dbd14684 is in state SUCCESS 2026-03-23 01:00:34.947613 | orchestrator | 2026-03-23 01:00:34.947628 | orchestrator | 2026-03-23 01:00:34.947635 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:00:34.947639 | orchestrator | 2026-03-23 01:00:34.947643 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 01:00:34.947648 | orchestrator | Monday 23 March 2026 00:59:51 +0000 (0:00:00.214) 0:00:00.214 ********** 2026-03-23 01:00:34.947652 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:00:34.947656 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:00:34.947660 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:00:34.947664 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:00:34.947668 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:00:34.947672 | orchestrator | ok: 
[testbed-node-5] 2026-03-23 01:00:34.947676 | orchestrator | ok: [testbed-manager] 2026-03-23 01:00:34.947691 | orchestrator | 2026-03-23 01:00:34.947696 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 01:00:34.947699 | orchestrator | Monday 23 March 2026 00:59:51 +0000 (0:00:00.663) 0:00:00.877 ********** 2026-03-23 01:00:34.947704 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-23 01:00:34.947708 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-23 01:00:34.947712 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-23 01:00:34.947728 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-23 01:00:34.947732 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-23 01:00:34.947736 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-23 01:00:34.947740 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-23 01:00:34.947744 | orchestrator | 2026-03-23 01:00:34.947748 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-23 01:00:34.947751 | orchestrator | 2026-03-23 01:00:34.947755 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-23 01:00:34.947759 | orchestrator | Monday 23 March 2026 00:59:52 +0000 (0:00:00.755) 0:00:01.633 ********** 2026-03-23 01:00:34.947763 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-23 01:00:34.947768 | orchestrator | 2026-03-23 01:00:34.947772 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-23 01:00:34.947776 | orchestrator | Monday 23 March 2026 00:59:54 +0000 (0:00:02.028) 0:00:03.661 ********** 
2026-03-23 01:00:34.947779 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-03-23 01:00:34.947783 | orchestrator | 2026-03-23 01:00:34.947787 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-23 01:00:34.947791 | orchestrator | Monday 23 March 2026 00:59:58 +0000 (0:00:03.735) 0:00:07.396 ********** 2026-03-23 01:00:34.947795 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-23 01:00:34.947800 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-23 01:00:34.947803 | orchestrator | 2026-03-23 01:00:34.947808 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-23 01:00:34.947811 | orchestrator | Monday 23 March 2026 01:00:04 +0000 (0:00:05.893) 0:00:13.290 ********** 2026-03-23 01:00:34.947822 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-23 01:00:34.947826 | orchestrator | 2026-03-23 01:00:34.947830 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-23 01:00:34.947833 | orchestrator | Monday 23 March 2026 01:00:07 +0000 (0:00:03.610) 0:00:16.901 ********** 2026-03-23 01:00:34.947837 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-03-23 01:00:34.947841 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-23 01:00:34.947845 | orchestrator | 2026-03-23 01:00:34.947848 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-23 01:00:34.947852 | orchestrator | Monday 23 March 2026 01:00:11 +0000 (0:00:03.569) 0:00:20.470 ********** 2026-03-23 01:00:34.947857 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-23 01:00:34.947863 | orchestrator | changed: 
[testbed-node-0] => (item=ResellerAdmin) 2026-03-23 01:00:34.947915 | orchestrator | 2026-03-23 01:00:34.947920 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-23 01:00:34.947923 | orchestrator | Monday 23 March 2026 01:00:16 +0000 (0:00:05.684) 0:00:26.154 ********** 2026-03-23 01:00:34.947927 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-03-23 01:00:34.947931 | orchestrator | 2026-03-23 01:00:34.947935 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:00:34.947939 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.947970 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.947976 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.947984 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.947988 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.947999 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.948003 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.948006 | orchestrator | 2026-03-23 01:00:34.948010 | orchestrator | 2026-03-23 01:00:34.948014 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:00:34.948018 | orchestrator | Monday 23 March 2026 01:00:21 +0000 (0:00:04.558) 0:00:30.712 ********** 2026-03-23 01:00:34.948022 | orchestrator | =============================================================================== 2026-03-23 01:00:34.948025 | 
orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.89s 2026-03-23 01:00:34.948029 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.69s 2026-03-23 01:00:34.948033 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.55s 2026-03-23 01:00:34.948037 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.74s 2026-03-23 01:00:34.948040 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.61s 2026-03-23 01:00:34.948044 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.57s 2026-03-23 01:00:34.948048 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.03s 2026-03-23 01:00:34.948051 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s 2026-03-23 01:00:34.948055 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s 2026-03-23 01:00:34.948059 | orchestrator | 2026-03-23 01:00:34.948063 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-23 01:00:34.948067 | orchestrator | 2.16.14 2026-03-23 01:00:34.948070 | orchestrator | 2026-03-23 01:00:34.948074 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-23 01:00:34.948078 | orchestrator | 2026-03-23 01:00:34.948082 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-23 01:00:34.948085 | orchestrator | Monday 23 March 2026 00:59:45 +0000 (0:00:00.233) 0:00:00.233 ********** 2026-03-23 01:00:34.948089 | orchestrator | changed: [testbed-manager] 2026-03-23 01:00:34.948093 | orchestrator | 2026-03-23 01:00:34.948097 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 
2026-03-23 01:00:34.948100 | orchestrator | Monday 23 March 2026 00:59:47 +0000 (0:00:02.163) 0:00:02.397 ********** 2026-03-23 01:00:34.948104 | orchestrator | changed: [testbed-manager] 2026-03-23 01:00:34.948108 | orchestrator | 2026-03-23 01:00:34.948112 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-23 01:00:34.948116 | orchestrator | Monday 23 March 2026 00:59:48 +0000 (0:00:00.875) 0:00:03.272 ********** 2026-03-23 01:00:34.948119 | orchestrator | changed: [testbed-manager] 2026-03-23 01:00:34.948123 | orchestrator | 2026-03-23 01:00:34.948127 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-23 01:00:34.948130 | orchestrator | Monday 23 March 2026 00:59:49 +0000 (0:00:01.386) 0:00:04.659 ********** 2026-03-23 01:00:34.948134 | orchestrator | changed: [testbed-manager] 2026-03-23 01:00:34.948138 | orchestrator | 2026-03-23 01:00:34.948142 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-23 01:00:34.948145 | orchestrator | Monday 23 March 2026 00:59:50 +0000 (0:00:01.256) 0:00:05.915 ********** 2026-03-23 01:00:34.948149 | orchestrator | changed: [testbed-manager] 2026-03-23 01:00:34.948153 | orchestrator | 2026-03-23 01:00:34.948159 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-23 01:00:34.948166 | orchestrator | Monday 23 March 2026 00:59:51 +0000 (0:00:00.848) 0:00:06.764 ********** 2026-03-23 01:00:34.948170 | orchestrator | changed: [testbed-manager] 2026-03-23 01:00:34.948173 | orchestrator | 2026-03-23 01:00:34.948177 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-23 01:00:34.948181 | orchestrator | Monday 23 March 2026 00:59:52 +0000 (0:00:00.935) 0:00:07.699 ********** 2026-03-23 01:00:34.948185 | orchestrator | changed: [testbed-manager] 2026-03-23 01:00:34.948188 
| orchestrator | 2026-03-23 01:00:34.948192 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-23 01:00:34.948196 | orchestrator | Monday 23 March 2026 00:59:54 +0000 (0:00:01.733) 0:00:09.432 ********** 2026-03-23 01:00:34.948199 | orchestrator | changed: [testbed-manager] 2026-03-23 01:00:34.948203 | orchestrator | 2026-03-23 01:00:34.948207 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-23 01:00:34.948211 | orchestrator | Monday 23 March 2026 00:59:55 +0000 (0:00:01.080) 0:00:10.512 ********** 2026-03-23 01:00:34.948214 | orchestrator | changed: [testbed-manager] 2026-03-23 01:00:34.948218 | orchestrator | 2026-03-23 01:00:34.948222 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-23 01:00:34.948225 | orchestrator | Monday 23 March 2026 01:00:07 +0000 (0:00:11.546) 0:00:22.059 ********** 2026-03-23 01:00:34.948229 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:00:34.948233 | orchestrator | 2026-03-23 01:00:34.948239 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-23 01:00:34.948247 | orchestrator | 2026-03-23 01:00:34.948255 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-23 01:00:34.948261 | orchestrator | Monday 23 March 2026 01:00:07 +0000 (0:00:00.153) 0:00:22.212 ********** 2026-03-23 01:00:34.948267 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:00:34.948273 | orchestrator | 2026-03-23 01:00:34.948279 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-23 01:00:34.948285 | orchestrator | 2026-03-23 01:00:34.948291 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-23 01:00:34.948297 | orchestrator | Monday 23 March 2026 01:00:08 +0000 
(0:00:01.683) 0:00:23.895 ********** 2026-03-23 01:00:34.948303 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:00:34.948308 | orchestrator | 2026-03-23 01:00:34.948315 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-23 01:00:34.948320 | orchestrator | 2026-03-23 01:00:34.948326 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-23 01:00:34.948336 | orchestrator | Monday 23 March 2026 01:00:20 +0000 (0:00:11.608) 0:00:35.504 ********** 2026-03-23 01:00:34.948343 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:00:34.948349 | orchestrator | 2026-03-23 01:00:34.948356 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:00:34.948362 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-23 01:00:34.948369 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.948375 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.948381 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:00:34.948387 | orchestrator | 2026-03-23 01:00:34.948391 | orchestrator | 2026-03-23 01:00:34.948395 | orchestrator | 2026-03-23 01:00:34.948398 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:00:34.948402 | orchestrator | Monday 23 March 2026 01:00:32 +0000 (0:00:11.606) 0:00:47.110 ********** 2026-03-23 01:00:34.948406 | orchestrator | =============================================================================== 2026-03-23 01:00:34.948414 | orchestrator | Restart ceph manager service ------------------------------------------- 24.90s 2026-03-23 01:00:34.948418 | orchestrator | Create 
admin user ------------------------------------------------------ 11.55s 2026-03-23 01:00:34.948421 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.16s 2026-03-23 01:00:34.948425 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.73s 2026-03-23 01:00:34.948429 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.39s 2026-03-23 01:00:34.948432 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.26s 2026-03-23 01:00:34.948436 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.08s 2026-03-23 01:00:34.948440 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.94s 2026-03-23 01:00:34.948444 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.88s 2026-03-23 01:00:34.948447 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.85s 2026-03-23 01:00:34.948451 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2026-03-23 01:00:34.948455 | orchestrator | 2026-03-23 01:00:34 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:00:34.948459 | orchestrator | 2026-03-23 01:00:34 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:00:37.975190 | orchestrator | 2026-03-23 01:00:37 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:00:37.977589 | orchestrator | 2026-03-23 01:00:37 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:00:37.980511 | orchestrator | 2026-03-23 01:00:37 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state STARTED 2026-03-23 01:00:37.981184 | orchestrator | 2026-03-23 01:00:37 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 
01:00:37.981254 | orchestrator | 2026-03-23 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:02:30.433950 | orchestrator | 2026-03-23 01:02:30 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:02:30.440945 | orchestrator | 2026-03-23 01:02:30.441021 | orchestrator | 2026-03-23 01:02:30 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:02:30.441032 | orchestrator | 2026-03-23 01:02:30 | INFO  | Task 70e92712-8562-4fa9-b41a-df3e63b909ea is in state SUCCESS 2026-03-23 01:02:30.441975 | orchestrator | 2026-03-23 01:02:30.442053 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:02:30.442065 | orchestrator | 2026-03-23 01:02:30.442073 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 01:02:30.442080 | orchestrator | Monday 23 March 2026 00:59:44 +0000 (0:00:00.335) 0:00:00.335 ********** 2026-03-23 01:02:30.442087 | orchestrator | ok: [testbed-manager] 2026-03-23 01:02:30.442095 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:02:30.442102 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:02:30.442109 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:02:30.442116 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:02:30.442123 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:02:30.442129 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:02:30.442136 | orchestrator | 2026-03-23 01:02:30.442144 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 01:02:30.442152 | orchestrator | Monday 23 March 2026 00:59:45 +0000 (0:00:00.725) 0:00:01.060 ********** 2026-03-23 01:02:30.442225 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-23 01:02:30.442233 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-23 01:02:30.442241 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-23 01:02:30.442248 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-23 01:02:30.442255 | 
orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-23 01:02:30.442263 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-23 01:02:30.442270 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-23 01:02:30.442277 | orchestrator | 2026-03-23 01:02:30.442287 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-23 01:02:30.442296 | orchestrator | 2026-03-23 01:02:30.442303 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-23 01:02:30.442310 | orchestrator | Monday 23 March 2026 00:59:45 +0000 (0:00:00.782) 0:00:01.842 ********** 2026-03-23 01:02:30.442317 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 01:02:30.442326 | orchestrator | 2026-03-23 01:02:30.442333 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-23 01:02:30.442340 | orchestrator | Monday 23 March 2026 00:59:46 +0000 (0:00:01.109) 0:00:02.952 ********** 2026-03-23 01:02:30.442349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.442359 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-23 01:02:30.443045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443111 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443158 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443205 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-23 01:02:30.443216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443340 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 
01:02:30.443552 | orchestrator | 2026-03-23 01:02:30.443559 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-23 01:02:30.443567 | orchestrator | Monday 23 March 2026 00:59:51 +0000 (0:00:04.285) 0:00:07.237 ********** 2026-03-23 01:02:30.443574 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 01:02:30.443581 | orchestrator | 2026-03-23 01:02:30.443587 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-23 01:02:30.443593 | orchestrator | Monday 23 March 2026 00:59:52 +0000 (0:00:01.392) 0:00:08.629 ********** 2026-03-23 01:02:30.443600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443607 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-23 01:02:30.443623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443663 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.443688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443700 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443728 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443743 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443792 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-23 01:02:30.443799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443831 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.443849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.443873 | orchestrator | 2026-03-23 01:02:30.443880 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-23 01:02:30.443888 | orchestrator | Monday 23 March 2026 00:59:58 +0000 (0:00:05.455) 0:00:14.084 ********** 2026-03-23 01:02:30.443895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.443906 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-23 01:02:30.443915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.443923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.443930 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.443966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.443975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.443982 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.443993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444000 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.444008 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-23 01:02:30.444027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444035 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444042 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:02:30.444054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444078 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.444085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444124 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.444134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444157 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.444164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444184 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.444194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444250 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.444257 | orchestrator | 2026-03-23 01:02:30.444264 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-23 01:02:30.444271 | orchestrator | Monday 23 March 2026 00:59:59 +0000 (0:00:01.272) 0:00:15.357 ********** 2026-03-23 01:02:30.444279 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-23 01:02:30.444286 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444293 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444304 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-23 01:02:30.444312 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444351 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444373 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:02:30.444381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-03-23 01:02:30.444453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-23 01:02:30.444470 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.444483 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.444490 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.444503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444539 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.444547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444570 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.444579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-23 01:02:30.444592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-23 
01:02:30.444606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-23 01:02:30.444614 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.444620 | orchestrator | 2026-03-23 01:02:30.444627 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-23 01:02:30.444635 | orchestrator | Monday 23 March 2026 01:00:01 +0000 (0:00:01.738) 0:00:17.095 ********** 2026-03-23 01:02:30.444642 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-23 01:02:30.444649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.444656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.444663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.444673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.444684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.445089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.445109 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.445117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-23 01:02:30.445124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.445131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.445151 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.445203 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.445217 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-23 01:02:30.445233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.445239 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445324 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.445330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.445345 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.445359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.445366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.445373 | orchestrator | 2026-03-23 01:02:30.445379 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-23 01:02:30.445386 | orchestrator | Monday 23 March 2026 01:00:06 +0000 (0:00:05.757) 0:00:22.852 ********** 2026-03-23 01:02:30.445393 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-23 01:02:30.445399 | orchestrator 
| 2026-03-23 01:02:30.445405 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-23 01:02:30.445428 | orchestrator | Monday 23 March 2026 01:00:07 +0000 (0:00:00.846) 0:00:23.699 ********** 2026-03-23 01:02:30.445435 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313764, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7101948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445443 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313764, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7101948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445450 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1313783, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.715195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445457 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313764, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7101948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445467 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313764, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7101948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445477 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1313783, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.715195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445500 | 
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1313747, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7081947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445523 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313764, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7101948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445530 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313764, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7101948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 01:02:30.445538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1313783, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.715195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445551 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1313747, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7081947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445559 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313775, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7133613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.445569 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1313783, 'dev': 139, 
2026-03-23 01:02:30.445 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2026-03-23 01:02:30.445 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-03-23 01:02:30.446 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-03-23 01:02:30.446 | orchestrator | skipping: [testbed-node-0] .. [testbed-node-5] => all /operations/prometheus rules items: alertmanager.rules, alertmanager.rec.rules, cadvisor.rules, ceph.rules, ceph.rec.rules, elasticsearch.rules, fluentd-aggregator.rules, haproxy.rules, hardware.rules, node.rules, openstack.rules, prometheus.rules, prometheus.rec.rules, prometheus-extra.rules, redfish.rules
2026-03-23 01:02:30.446 | orchestrator | (repeated per-item file metadata condensed: every item is a regular file, mode 0644, owner root:root, uid/gid 0; loop output continues truncated)
1313779, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.714195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446462 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313746, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7041514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446474 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313779, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.714195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446487 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313746, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7041514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446499 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313743, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7021947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446541 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313771, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7120624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446555 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1313744, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7031946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-03-23 01:02:30.446585 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313792, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.721195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446597 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313792, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.721195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446610 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1313744, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7031946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446620 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313770, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7114089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446627 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313746, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7041514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446636 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313771, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7120624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446643 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1313744, 'dev': 
139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7031946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446658 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313779, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.714195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446666 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313770, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7114089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446674 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313779, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.714195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446681 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313745, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7038043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 01:02:30.446687 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313771, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7120624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446696 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313791, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.720195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 
01:02:30.446703 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.446710 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313770, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7114089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446725 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313746, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7041514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446732 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313771, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7120624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446739 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313791, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.720195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446746 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.446752 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313791, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.720195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446759 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.446765 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313746, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7041514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446774 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1313744, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7031946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446785 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313765, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7101948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 01:02:30.446795 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313770, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7114089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446802 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1313744, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7031946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446809 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313791, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.720195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446816 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.446823 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313771, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7120624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446829 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313771, 'dev': 139, 'nlink': 
1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7120624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446838 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313770, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7114089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446851 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1313772, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7123034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 01:02:30.446862 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313770, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7114089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446869 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313791, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.720195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446875 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.446882 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313791, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.720195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-23 01:02:30.446889 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.446896 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313767, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.710968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446902 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313760, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7081947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446915 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313780, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7151315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446921 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313743, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7021947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446932 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313792, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.721195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446939 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313779, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.714195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446945 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313746, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7041514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446952 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1313744, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7031946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446958 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313771, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7120624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446971 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313770, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.7114089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-23 01:02:30.446978 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313791, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime':
1774224151.0, 'ctime': 1774225131.720195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-23 01:02:30.446984 | orchestrator | 2026-03-23 01:02:30.446991 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-23 01:02:30.446998 | orchestrator | Monday 23 March 2026 01:00:31 +0000 (0:00:23.769) 0:00:47.469 ********** 2026-03-23 01:02:30.447004 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-23 01:02:30.447011 | orchestrator | 2026-03-23 01:02:30.447020 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-23 01:02:30.447027 | orchestrator | Monday 23 March 2026 01:00:32 +0000 (0:00:01.281) 0:00:48.750 ********** 2026-03-23 01:02:30.447034 | orchestrator | [WARNING]: Skipped 2026-03-23 01:02:30.447040 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447048 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-23 01:02:30.447054 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447060 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-23 01:02:30.447067 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-23 01:02:30.447073 | orchestrator | [WARNING]: Skipped 2026-03-23 01:02:30.447079 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447085 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-23 01:02:30.447092 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447098 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-23 01:02:30.447105 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-03-23 01:02:30.447111 | orchestrator | [WARNING]: Skipped 2026-03-23 01:02:30.447117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447124 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-23 01:02:30.447130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447135 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-23 01:02:30.447141 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-23 01:02:30.447147 | orchestrator | [WARNING]: Skipped 2026-03-23 01:02:30.447154 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447160 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-23 01:02:30.447166 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447177 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-23 01:02:30.447184 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-23 01:02:30.447191 | orchestrator | [WARNING]: Skipped 2026-03-23 01:02:30.447198 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447204 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-23 01:02:30.447211 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447217 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-23 01:02:30.447223 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-23 01:02:30.447230 | orchestrator | [WARNING]: Skipped 2026-03-23 01:02:30.447236 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447243 | orchestrator | node-4/prometheus.yml.d' path due to this access 
issue: 2026-03-23 01:02:30.447249 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447256 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-23 01:02:30.447262 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-23 01:02:30.447269 | orchestrator | [WARNING]: Skipped 2026-03-23 01:02:30.447276 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447282 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-23 01:02:30.447288 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-23 01:02:30.447294 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-23 01:02:30.447300 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-23 01:02:30.447307 | orchestrator | 2026-03-23 01:02:30.447313 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-23 01:02:30.447319 | orchestrator | Monday 23 March 2026 01:00:35 +0000 (0:00:02.505) 0:00:51.255 ********** 2026-03-23 01:02:30.447325 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-23 01:02:30.447333 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.447346 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-23 01:02:30.447360 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.447369 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-23 01:02:30.447375 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.447381 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-23 01:02:30.447387 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.447393 | 
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-23 01:02:30.447399 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.447410 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-23 01:02:30.447423 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.447434 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-23 01:02:30.447447 | orchestrator | 2026-03-23 01:02:30.447459 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-23 01:02:30.447470 | orchestrator | Monday 23 March 2026 01:00:50 +0000 (0:00:15.085) 0:01:06.341 ********** 2026-03-23 01:02:30.447482 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-23 01:02:30.447502 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.447526 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-23 01:02:30.447533 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.447539 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-23 01:02:30.447560 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.447571 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-23 01:02:30.447582 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.447593 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-23 01:02:30.447604 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.447616 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 
 2026-03-23 01:02:30.447628 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.447640 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-23 01:02:30.447651 | orchestrator | 2026-03-23 01:02:30.447662 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-23 01:02:30.447675 | orchestrator | Monday 23 March 2026 01:00:53 +0000 (0:00:03.303) 0:01:09.644 ********** 2026-03-23 01:02:30.447683 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-23 01:02:30.447691 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.447697 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-23 01:02:30.447704 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.447710 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-23 01:02:30.447716 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.447722 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-23 01:02:30.447728 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.447734 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-23 01:02:30.447740 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.447746 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-23 01:02:30.447752 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.447758 | 
orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-23 01:02:30.447764 | orchestrator | 2026-03-23 01:02:30.447770 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-23 01:02:30.447776 | orchestrator | Monday 23 March 2026 01:00:55 +0000 (0:00:02.001) 0:01:11.646 ********** 2026-03-23 01:02:30.447782 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-23 01:02:30.447788 | orchestrator | 2026-03-23 01:02:30.447794 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-23 01:02:30.447800 | orchestrator | Monday 23 March 2026 01:00:56 +0000 (0:00:00.572) 0:01:12.219 ********** 2026-03-23 01:02:30.447806 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:02:30.447813 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.447818 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.447824 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.447830 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.447835 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.447842 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.447848 | orchestrator | 2026-03-23 01:02:30.447853 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-23 01:02:30.447860 | orchestrator | Monday 23 March 2026 01:00:56 +0000 (0:00:00.748) 0:01:12.967 ********** 2026-03-23 01:02:30.447866 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:02:30.447894 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.447911 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.447917 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.447923 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:30.447930 | orchestrator | changed: 
[testbed-node-1] 2026-03-23 01:02:30.447936 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:30.447942 | orchestrator | 2026-03-23 01:02:30.447947 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-23 01:02:30.447953 | orchestrator | Monday 23 March 2026 01:00:59 +0000 (0:00:02.829) 0:01:15.796 ********** 2026-03-23 01:02:30.447959 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-23 01:02:30.447966 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.447971 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-23 01:02:30.447977 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.447983 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-23 01:02:30.447989 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.447995 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-23 01:02:30.448001 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:02:30.448013 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-23 01:02:30.448019 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.448024 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-23 01:02:30.448029 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.448035 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-23 01:02:30.448040 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.448046 | orchestrator | 2026-03-23 01:02:30.448052 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-23 01:02:30.448058 | 
orchestrator | Monday 23 March 2026 01:01:01 +0000 (0:00:01.826) 0:01:17.623 ********** 2026-03-23 01:02:30.448064 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-23 01:02:30.448070 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-23 01:02:30.448076 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-23 01:02:30.448082 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.448087 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.448093 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.448099 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-23 01:02:30.448105 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.448111 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-23 01:02:30.448116 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.448122 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-23 01:02:30.448128 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-23 01:02:30.448133 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.448139 | orchestrator | 2026-03-23 01:02:30.448145 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-23 01:02:30.448150 | orchestrator | Monday 23 March 2026 01:01:03 +0000 (0:00:02.235) 0:01:19.859 ********** 2026-03-23 01:02:30.448156 | orchestrator | [WARNING]: Skipped 2026-03-23 01:02:30.448162 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-23 01:02:30.448174 | orchestrator | due to this access issue: 2026-03-23 01:02:30.448179 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-23 01:02:30.448185 | orchestrator | not a directory 2026-03-23 01:02:30.448191 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-23 01:02:30.448197 | orchestrator | 2026-03-23 01:02:30.448203 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-23 01:02:30.448209 | orchestrator | Monday 23 March 2026 01:01:05 +0000 (0:00:02.058) 0:01:21.917 ********** 2026-03-23 01:02:30.448215 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:02:30.448220 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.448226 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.448232 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.448238 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.448244 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.448250 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:02:30.448257 | orchestrator | 2026-03-23 01:02:30.448263 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-23 01:02:30.448269 | orchestrator | Monday 23 March 2026 01:01:06 +0000 (0:00:00.677) 0:01:22.594 ********** 2026-03-23 01:02:30.448275 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:02:30.448281 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:30.448288 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:30.448294 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:30.448301 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:02:30.448307 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:02:30.448314 | orchestrator | skipping: [testbed-node-5] 2026-03-23 
01:02:30.448320 | orchestrator | 2026-03-23 01:02:30.448326 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-23 01:02:30.448332 | orchestrator | Monday 23 March 2026 01:01:07 +0000 (0:00:01.033) 0:01:23.627 ********** 2026-03-23 01:02:30.448342 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-23 01:02:30.448355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.448363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.448369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.448379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.448386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.448392 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.448402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.448409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-23 01:02:30.448420 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.448437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.448444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448464 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-23 01:02:30.448472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448483 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.448490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.448500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-23 01:02:30.448712 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.448720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-23 01:02:30.448771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.448778 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.448785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-23 01:02:30.448792 | orchestrator | 2026-03-23 01:02:30.448799 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-23 01:02:30.448806 | orchestrator | Monday 23 March 2026 01:01:12 +0000 (0:00:05.235) 0:01:28.863 ********** 2026-03-23 01:02:30.448813 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-23 01:02:30.448820 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:02:30.448826 | orchestrator | 2026-03-23 01:02:30.448832 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-23 01:02:30.448838 | orchestrator | Monday 23 March 2026 01:01:13 +0000 (0:00:00.997) 0:01:29.861 ********** 2026-03-23 01:02:30.448845 | orchestrator | 2026-03-23 01:02:30.448851 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-23 01:02:30.448858 | orchestrator | Monday 23 March 2026 01:01:13 +0000 
(0:00:00.054) 0:01:29.915 ********** 2026-03-23 01:02:30.448863 | orchestrator | 2026-03-23 01:02:30.448869 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-23 01:02:30.448875 | orchestrator | Monday 23 March 2026 01:01:13 +0000 (0:00:00.080) 0:01:29.996 ********** 2026-03-23 01:02:30.448882 | orchestrator | 2026-03-23 01:02:30.448892 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-23 01:02:30.448899 | orchestrator | Monday 23 March 2026 01:01:14 +0000 (0:00:00.065) 0:01:30.061 ********** 2026-03-23 01:02:30.448905 | orchestrator | 2026-03-23 01:02:30.448912 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-23 01:02:30.448918 | orchestrator | Monday 23 March 2026 01:01:14 +0000 (0:00:00.073) 0:01:30.135 ********** 2026-03-23 01:02:30.448925 | orchestrator | 2026-03-23 01:02:30.448931 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-23 01:02:30.448937 | orchestrator | Monday 23 March 2026 01:01:14 +0000 (0:00:00.048) 0:01:30.184 ********** 2026-03-23 01:02:30.448947 | orchestrator | 2026-03-23 01:02:30.448953 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-23 01:02:30.448960 | orchestrator | Monday 23 March 2026 01:01:14 +0000 (0:00:00.052) 0:01:30.236 ********** 2026-03-23 01:02:30.448966 | orchestrator | 2026-03-23 01:02:30.448972 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-23 01:02:30.448979 | orchestrator | Monday 23 March 2026 01:01:14 +0000 (0:00:00.067) 0:01:30.304 ********** 2026-03-23 01:02:30.448985 | orchestrator | changed: [testbed-manager] 2026-03-23 01:02:30.448992 | orchestrator | 2026-03-23 01:02:30.448998 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 
2026-03-23 01:02:30.449007 | orchestrator | Monday 23 March 2026 01:01:29 +0000 (0:00:15.244) 0:01:45.549 ********** 2026-03-23 01:02:30.449013 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:30.449020 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:30.449026 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:02:30.449032 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:02:30.449038 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:30.449044 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:02:30.449050 | orchestrator | changed: [testbed-manager] 2026-03-23 01:02:30.449057 | orchestrator | 2026-03-23 01:02:30.449063 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-23 01:02:30.449069 | orchestrator | Monday 23 March 2026 01:01:41 +0000 (0:00:11.955) 0:01:57.504 ********** 2026-03-23 01:02:30.449075 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:30.449081 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:30.449088 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:30.449094 | orchestrator | 2026-03-23 01:02:30.449100 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-23 01:02:30.449106 | orchestrator | Monday 23 March 2026 01:01:46 +0000 (0:00:04.988) 0:02:02.493 ********** 2026-03-23 01:02:30.449113 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:30.449119 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:30.449125 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:30.449131 | orchestrator | 2026-03-23 01:02:30.449138 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-23 01:02:30.449144 | orchestrator | Monday 23 March 2026 01:01:52 +0000 (0:00:06.437) 0:02:08.931 ********** 2026-03-23 01:02:30.449150 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:30.449156 | orchestrator 
| changed: [testbed-node-5] 2026-03-23 01:02:30.449163 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:02:30.449169 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:30.449175 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:02:30.449182 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:30.449188 | orchestrator | changed: [testbed-manager] 2026-03-23 01:02:30.449194 | orchestrator | 2026-03-23 01:02:30.449199 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-23 01:02:30.449205 | orchestrator | Monday 23 March 2026 01:02:00 +0000 (0:00:07.719) 0:02:16.650 ********** 2026-03-23 01:02:30.449211 | orchestrator | changed: [testbed-manager] 2026-03-23 01:02:30.449218 | orchestrator | 2026-03-23 01:02:30.449224 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-23 01:02:30.449230 | orchestrator | Monday 23 March 2026 01:02:12 +0000 (0:00:12.171) 0:02:28.821 ********** 2026-03-23 01:02:30.449235 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:30.449241 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:30.449246 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:30.449252 | orchestrator | 2026-03-23 01:02:30.449258 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-23 01:02:30.449264 | orchestrator | Monday 23 March 2026 01:02:18 +0000 (0:00:05.498) 0:02:34.320 ********** 2026-03-23 01:02:30.449270 | orchestrator | changed: [testbed-manager] 2026-03-23 01:02:30.449276 | orchestrator | 2026-03-23 01:02:30.449282 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-23 01:02:30.449292 | orchestrator | Monday 23 March 2026 01:02:22 +0000 (0:00:04.320) 0:02:38.641 ********** 2026-03-23 01:02:30.449299 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:02:30.449306 | orchestrator | 
changed: [testbed-node-3] 2026-03-23 01:02:30.449313 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:02:30.449319 | orchestrator | 2026-03-23 01:02:30.449326 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:02:30.449333 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-23 01:02:30.449341 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-23 01:02:30.449348 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-23 01:02:30.449355 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-23 01:02:30.449362 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-23 01:02:30.449372 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-23 01:02:30.449379 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-23 01:02:30.449386 | orchestrator | 2026-03-23 01:02:30.449393 | orchestrator | 2026-03-23 01:02:30.449399 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:02:30.449406 | orchestrator | Monday 23 March 2026 01:02:29 +0000 (0:00:06.799) 0:02:45.440 ********** 2026-03-23 01:02:30.449413 | orchestrator | =============================================================================== 2026-03-23 01:02:30.449420 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.77s 2026-03-23 01:02:30.449427 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.24s 2026-03-23 01:02:30.449434 | orchestrator | prometheus : Copying over prometheus config 
file ----------------------- 15.09s 2026-03-23 01:02:30.449441 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.17s 2026-03-23 01:02:30.449448 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 11.96s 2026-03-23 01:02:30.449458 | orchestrator | prometheus : Restart prometheus-cadvisor container ---------------------- 7.72s 2026-03-23 01:02:30.449465 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.80s 2026-03-23 01:02:30.449472 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.44s 2026-03-23 01:02:30.449479 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.76s 2026-03-23 01:02:30.449486 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.50s 2026-03-23 01:02:30.449493 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.46s 2026-03-23 01:02:30.449500 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.24s 2026-03-23 01:02:30.449517 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 4.99s 2026-03-23 01:02:30.449525 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.32s 2026-03-23 01:02:30.449532 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.29s 2026-03-23 01:02:30.449539 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.30s 2026-03-23 01:02:30.449546 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.83s 2026-03-23 01:02:30.449553 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.51s 2026-03-23 01:02:30.449565 | orchestrator | prometheus : Copying config file for blackbox exporter 
------------------ 2.24s 2026-03-23 01:02:30.449572 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 2.06s 2026-03-23 01:02:30.449579 | orchestrator | 2026-03-23 01:02:30 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:02:30.449586 | orchestrator | 2026-03-23 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:02:33.497756 | orchestrator | 2026-03-23 01:02:33 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:02:33.499002 | orchestrator | 2026-03-23 01:02:33 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:02:33.500904 | orchestrator | 2026-03-23 01:02:33 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state STARTED 2026-03-23 01:02:33.502579 | orchestrator | 2026-03-23 01:02:33 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED 2026-03-23 01:02:33.502618 | orchestrator | 2026-03-23 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:02:36.544721 | orchestrator | 2026-03-23 01:02:36 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:02:36.546532 | orchestrator | 2026-03-23 01:02:36 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:02:36.549108 | orchestrator | 2026-03-23 01:02:36 | INFO  | Task 94063cec-25fc-4516-8bd5-df77462cff14 is in state SUCCESS 2026-03-23 01:02:36.550131 | orchestrator | 2026-03-23 01:02:36.550178 | orchestrator | 2026-03-23 01:02:36.550184 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:02:36.550190 | orchestrator | 2026-03-23 01:02:36.550195 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 01:02:36.550200 | orchestrator | Monday 23 March 2026 00:59:51 +0000 (0:00:00.297) 0:00:00.297 ********** 2026-03-23 01:02:36.550205 | 
orchestrator | ok: [testbed-node-0] 2026-03-23 01:02:36.550210 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:02:36.550214 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:02:36.550220 | orchestrator | 2026-03-23 01:02:36.550225 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 01:02:36.550230 | orchestrator | Monday 23 March 2026 00:59:51 +0000 (0:00:00.300) 0:00:00.597 ********** 2026-03-23 01:02:36.550235 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-23 01:02:36.550240 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-23 01:02:36.550245 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-23 01:02:36.550250 | orchestrator | 2026-03-23 01:02:36.550254 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-23 01:02:36.550259 | orchestrator | 2026-03-23 01:02:36.550275 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-23 01:02:36.550280 | orchestrator | Monday 23 March 2026 00:59:51 +0000 (0:00:00.301) 0:00:00.898 ********** 2026-03-23 01:02:36.550284 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:02:36.550290 | orchestrator | 2026-03-23 01:02:36.550295 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-23 01:02:36.550299 | orchestrator | Monday 23 March 2026 00:59:52 +0000 (0:00:00.675) 0:00:01.574 ********** 2026-03-23 01:02:36.550304 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-23 01:02:36.550308 | orchestrator | 2026-03-23 01:02:36.550313 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-23 01:02:36.550318 | orchestrator | Monday 23 March 2026 00:59:56 +0000 (0:00:04.118) 0:00:05.692 ********** 
2026-03-23 01:02:36.550324 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-23 01:02:36.550343 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-23 01:02:36.550348 | orchestrator | 2026-03-23 01:02:36.550353 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-23 01:02:36.550358 | orchestrator | Monday 23 March 2026 01:00:02 +0000 (0:00:05.740) 0:00:11.433 ********** 2026-03-23 01:02:36.550363 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-23 01:02:36.550367 | orchestrator | 2026-03-23 01:02:36.550372 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-23 01:02:36.550377 | orchestrator | Monday 23 March 2026 01:00:06 +0000 (0:00:04.005) 0:00:15.439 ********** 2026-03-23 01:02:36.550382 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-23 01:02:36.550388 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-23 01:02:36.550393 | orchestrator | 2026-03-23 01:02:36.550398 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-23 01:02:36.550403 | orchestrator | Monday 23 March 2026 01:00:10 +0000 (0:00:04.171) 0:00:19.610 ********** 2026-03-23 01:02:36.550407 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-23 01:02:36.550412 | orchestrator | 2026-03-23 01:02:36.550416 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-23 01:02:36.550420 | orchestrator | Monday 23 March 2026 01:00:13 +0000 (0:00:02.829) 0:00:22.440 ********** 2026-03-23 01:02:36.550425 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-23 01:02:36.550429 | orchestrator | 2026-03-23 01:02:36.550434 | orchestrator | TASK [glance : 
Ensuring config directories exist] ****************************** 2026-03-23 01:02:36.550438 | orchestrator | Monday 23 March 2026 01:00:17 +0000 (0:00:03.841) 0:00:26.282 ********** 2026-03-23 01:02:36.550457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.550467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.550476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.550481 | orchestrator | 2026-03-23 01:02:36.550485 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-23 01:02:36.550490 | orchestrator | Monday 23 March 2026 01:00:21 +0000 (0:00:04.339) 0:00:30.621 ********** 2026-03-23 01:02:36.550495 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:02:36.550530 | orchestrator | 2026-03-23 01:02:36.550535 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-23 01:02:36.550543 | orchestrator | Monday 23 March 
2026 01:00:22 +0000 (0:00:00.585) 0:00:31.206 ********** 2026-03-23 01:02:36.550548 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:36.550552 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:36.550557 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:36.550562 | orchestrator | 2026-03-23 01:02:36.550567 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-23 01:02:36.550572 | orchestrator | Monday 23 March 2026 01:00:25 +0000 (0:00:03.796) 0:00:35.003 ********** 2026-03-23 01:02:36.550576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-23 01:02:36.550585 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-23 01:02:36.550589 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-23 01:02:36.550594 | orchestrator | 2026-03-23 01:02:36.550598 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-23 01:02:36.550605 | orchestrator | Monday 23 March 2026 01:00:27 +0000 (0:00:01.586) 0:00:36.589 ********** 2026-03-23 01:02:36.550609 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-23 01:02:36.550614 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-23 01:02:36.550619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-23 01:02:36.550624 | orchestrator | 2026-03-23 01:02:36.550628 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-23 01:02:36.550633 | orchestrator | Monday 23 March 2026 01:00:28 +0000 (0:00:01.380) 0:00:37.970 
********** 2026-03-23 01:02:36.550639 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:02:36.550643 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:02:36.550648 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:02:36.550653 | orchestrator | 2026-03-23 01:02:36.550658 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-23 01:02:36.550663 | orchestrator | Monday 23 March 2026 01:00:29 +0000 (0:00:00.708) 0:00:38.679 ********** 2026-03-23 01:02:36.550667 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:36.550672 | orchestrator | 2026-03-23 01:02:36.550676 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-23 01:02:36.550681 | orchestrator | Monday 23 March 2026 01:00:29 +0000 (0:00:00.106) 0:00:38.785 ********** 2026-03-23 01:02:36.550685 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:36.550690 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:36.550695 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:36.550700 | orchestrator | 2026-03-23 01:02:36.550704 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-23 01:02:36.550709 | orchestrator | Monday 23 March 2026 01:00:29 +0000 (0:00:00.236) 0:00:39.021 ********** 2026-03-23 01:02:36.550713 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:02:36.550718 | orchestrator | 2026-03-23 01:02:36.550722 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-23 01:02:36.550727 | orchestrator | Monday 23 March 2026 01:00:30 +0000 (0:00:00.584) 0:00:39.606 ********** 2026-03-23 01:02:36.550732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.550749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.550756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.550760 | orchestrator | 2026-03-23 01:02:36.550765 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-23 01:02:36.550770 | orchestrator | Monday 23 March 2026 01:00:35 +0000 (0:00:05.395) 0:00:45.001 ********** 2026-03-23 01:02:36.550785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-23 01:02:36.550791 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:36.550797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-23 01:02:36.550802 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:36.550811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-23 01:02:36.550820 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:36.550825 | orchestrator | 2026-03-23 01:02:36.550831 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-23 01:02:36.550835 | orchestrator | Monday 23 March 2026 01:00:39 +0000 (0:00:03.331) 0:00:48.332 ********** 2026-03-23 01:02:36.550843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-23 01:02:36.550849 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:36.550854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-23 01:02:36.550863 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:36.550874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-23 01:02:36.550880 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:36.550885 | orchestrator | 2026-03-23 01:02:36.550891 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-23 01:02:36.550896 | orchestrator | Monday 23 March 2026 01:00:42 +0000 (0:00:03.449) 0:00:51.782 ********** 2026-03-23 01:02:36.550901 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:36.550906 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:36.550911 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:36.550916 | orchestrator | 2026-03-23 01:02:36.550921 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-23 01:02:36.550925 | orchestrator | Monday 23 March 2026 01:00:46 +0000 (0:00:03.360) 0:00:55.142 ********** 2026-03-23 01:02:36.550930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.550946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.550952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-23 01:02:36.550960 | orchestrator |
2026-03-23 01:02:36.550965 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-03-23 01:02:36.550970 | orchestrator | Monday 23 March 2026 01:00:49 +0000 (0:00:03.365) 0:00:58.508 **********
2026-03-23 01:02:36.550975 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:02:36.550980 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:02:36.550984 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:02:36.550989 | orchestrator |
2026-03-23 01:02:36.550994 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-03-23 01:02:36.550999 | orchestrator | Monday 23 March 2026 01:00:56 +0000 (0:00:06.964) 0:01:05.472 **********
2026-03-23 01:02:36.551004 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:02:36.551008 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:02:36.551013 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:02:36.551017 | orchestrator |
2026-03-23 01:02:36.551021 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-03-23 01:02:36.551025 | orchestrator | Monday 23 March 2026 01:01:01 +0000 (0:00:05.339) 0:01:10.812 **********
2026-03-23 01:02:36.551030 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:02:36.551034 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:02:36.551039 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:02:36.551043 | orchestrator |
2026-03-23 01:02:36.551047 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-03-23 01:02:36.551052 | orchestrator | Monday 23 March 2026 01:01:05 +0000 (0:00:04.303) 0:01:15.115 **********
2026-03-23 01:02:36.551056 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:02:36.551060 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:02:36.551068 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:02:36.551073 | orchestrator |
2026-03-23 01:02:36.551078 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-03-23 01:02:36.551082 | orchestrator | Monday 23 March 2026 01:01:10 +0000 (0:00:04.295) 0:01:19.410 **********
2026-03-23 01:02:36.551087 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:02:36.551092 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:02:36.551097 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:02:36.551102 | orchestrator |
2026-03-23 01:02:36.551107 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-03-23 01:02:36.551112 | orchestrator | Monday 23 March 2026 01:01:13 +0000 (0:00:03.676) 0:01:23.087 **********
2026-03-23 01:02:36.551116 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:02:36.551121 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:02:36.551125 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:02:36.551129 | orchestrator |
2026-03-23 01:02:36.551134 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-03-23 01:02:36.551139 | orchestrator | Monday 23 March 2026 01:01:14 +0000 (0:00:00.367) 0:01:23.454 **********
2026-03-23 01:02:36.551148 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-23 01:02:36.551153 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:02:36.551158 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-23 01:02:36.551163 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:02:36.551168 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-23 01:02:36.551173 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:02:36.551178 | orchestrator |
2026-03-23 01:02:36.551183 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-03-23 01:02:36.551189 | orchestrator | Monday 23 March 2026 01:01:17 +0000 (0:00:03.224) 0:01:26.679 **********
2026-03-23 01:02:36.551199 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:02:36.551205 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:02:36.551210 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:02:36.551215 | orchestrator |
2026-03-23 01:02:36.551220 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************
2026-03-23 01:02:36.551225 | orchestrator | Monday 23 March 2026 01:01:21 +0000 (0:00:03.965) 0:01:30.645 **********
2026-03-23 01:02:36.551229 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:02:36.551234 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:02:36.551239 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:02:36.551244 | orchestrator |
2026-03-23 01:02:36.551248 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-03-23 01:02:36.551252 | orchestrator | Monday 23 March 2026 01:01:24 +0000 (0:00:03.364) 0:01:34.009 **********
2026-03-23 01:02:36.551258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.551271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-23 01:02:36.551281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-23 01:02:36.551287 | orchestrator |
2026-03-23 01:02:36.551291 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-23 01:02:36.551297 | orchestrator | Monday 23 March 2026 01:01:28 +0000 (0:00:03.916) 0:01:37.926 **********
2026-03-23 01:02:36.551302 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:02:36.551307 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:02:36.551312 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:02:36.551317 | orchestrator |
2026-03-23 01:02:36.551322 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-03-23 01:02:36.551328 | orchestrator | Monday 23 March 2026 01:01:29 +0000 (0:00:00.378) 0:01:38.305 **********
2026-03-23 01:02:36.551332 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:02:36.551337 | orchestrator |
2026-03-23 01:02:36.551342 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-03-23 01:02:36.551347 | orchestrator | Monday 23 March 2026 01:01:31 +0000 (0:00:02.597) 0:01:40.902 **********
2026-03-23 01:02:36.551352 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:02:36.551357 | orchestrator |
2026-03-23 01:02:36.551362 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-03-23 01:02:36.551367 | orchestrator | Monday 23 March 2026 01:01:34 +0000 (0:00:03.118) 0:01:44.021 **********
2026-03-23 01:02:36.551372 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:02:36.551376 | orchestrator |
2026-03-23 01:02:36.551381 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-03-23 01:02:36.551386 | orchestrator | Monday 23 March 2026 01:01:36 +0000 (0:00:02.099) 0:01:46.121 **********
2026-03-23 01:02:36.551390 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:02:36.551395 | orchestrator |
2026-03-23 01:02:36.551399 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-03-23 01:02:36.551404 | orchestrator | Monday 23 March 2026 01:02:04 +0000 (0:00:27.257) 0:02:13.378 **********
2026-03-23 01:02:36.551409 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:02:36.551414 | orchestrator |
2026-03-23 01:02:36.551422 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-23 01:02:36.551431 | orchestrator | Monday 23 March 2026 01:02:05 +0000 (0:00:01.724) 0:02:15.103 **********
2026-03-23 01:02:36.551436 | orchestrator |
2026-03-23 01:02:36.551440 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-23 01:02:36.551445 | orchestrator | Monday 23 March 2026 01:02:06 +0000 (0:00:00.060) 0:02:15.164 **********
2026-03-23 01:02:36.551449 | orchestrator |
2026-03-23 01:02:36.551454 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-23 01:02:36.551459 | orchestrator | Monday 23 March 2026 01:02:06 +0000 (0:00:00.058) 0:02:15.223 **********
2026-03-23 01:02:36.551463 | orchestrator |
2026-03-23 01:02:36.551468 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-03-23 01:02:36.551473 | orchestrator | Monday 23 March 2026 01:02:06 +0000 (0:00:00.061) 0:02:15.285 **********
2026-03-23 01:02:36.551478 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:02:36.551482 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:02:36.551487 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:02:36.551491 | orchestrator |
2026-03-23 01:02:36.551512 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 01:02:36.551520 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2026-03-23 01:02:36.551526 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-23 01:02:36.551531 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-23 01:02:36.551535 | orchestrator |
2026-03-23 01:02:36.551540 | orchestrator |
2026-03-23 01:02:36.551545 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 01:02:36.551550 | orchestrator | Monday 23 March 2026 01:02:34 +0000 (0:00:28.752) 0:02:44.037 **********
2026-03-23 01:02:36.551555 | orchestrator | ===============================================================================
2026-03-23 01:02:36.551560 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.75s
2026-03-23 01:02:36.551565 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.26s
2026-03-23 01:02:36.551570 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.96s
2026-03-23 01:02:36.551575 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.74s
2026-03-23 01:02:36.551580 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.40s
2026-03-23 01:02:36.551585 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.34s
2026-03-23 01:02:36.551590 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.34s
2026-03-23 01:02:36.551594 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.30s
2026-03-23 01:02:36.551599 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.30s
2026-03-23 01:02:36.551604 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.17s
2026-03-23 01:02:36.551609 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.12s
2026-03-23 01:02:36.551613 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.01s
2026-03-23 01:02:36.551618 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.97s
2026-03-23 01:02:36.551623 | orchestrator | glance : Check glance containers ---------------------------------------- 3.92s
2026-03-23 01:02:36.551628 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.84s
2026-03-23 01:02:36.551633 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.80s
2026-03-23 01:02:36.551638 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.68s
2026-03-23 01:02:36.551642 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.45s
2026-03-23 01:02:36.551651 | orchestrator | glance : Copying over config.json files for services -------------------- 3.37s
2026-03-23 01:02:36.551656 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 3.36s
2026-03-23 01:02:36.551661 | orchestrator | 2026-03-23 01:02:36 | INFO  | Task 5b43eb6e-79a3-4678-bf24-dd5d95389acd is in state STARTED
2026-03-23 01:02:36.552632 | orchestrator | 2026-03-23 01:02:36 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED
2026-03-23 01:02:36.552727 | orchestrator | 2026-03-23 01:02:36 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:02:39.585271 | orchestrator | 2026-03-23 01:02:39 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:02:39.586839 | orchestrator | 2026-03-23 01:02:39 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED
2026-03-23 01:02:39.597167 | orchestrator | 2026-03-23 01:02:39 | INFO  | Task 5b43eb6e-79a3-4678-bf24-dd5d95389acd is in state STARTED
2026-03-23 01:02:39.599481 | orchestrator | 2026-03-23 01:02:39 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED
2026-03-23 01:02:39.599555 | orchestrator | 2026-03-23 01:02:39 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:02:42.641726 | orchestrator | 2026-03-23 01:02:42 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:02:42.642885 | orchestrator | 2026-03-23 01:02:42 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED
2026-03-23 01:02:42.644501 | orchestrator | 2026-03-23 01:02:42 | INFO  | Task 5b43eb6e-79a3-4678-bf24-dd5d95389acd is in state STARTED
2026-03-23 01:02:42.646893 | orchestrator | 2026-03-23 01:02:42 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED
2026-03-23 01:02:42.646928 | orchestrator | 2026-03-23 01:02:42 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:02:45.688432 | orchestrator | 2026-03-23 01:02:45 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:02:45.692198 | orchestrator | 2026-03-23 01:02:45 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED
2026-03-23 01:02:45.693531 | orchestrator | 2026-03-23 01:02:45 | INFO  | Task 5b43eb6e-79a3-4678-bf24-dd5d95389acd is in state STARTED
2026-03-23 01:02:45.695292 | orchestrator | 2026-03-23 01:02:45 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED
2026-03-23 01:02:45.695525 | orchestrator | 2026-03-23 01:02:45 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:02:48.747841 | orchestrator | 2026-03-23 01:02:48 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:02:48.750071 | orchestrator | 2026-03-23 01:02:48 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED
2026-03-23 01:02:48.751345 | orchestrator | 2026-03-23 01:02:48 | INFO  | Task 5b43eb6e-79a3-4678-bf24-dd5d95389acd is in state STARTED
2026-03-23 01:02:48.753011 | orchestrator | 2026-03-23 01:02:48 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state STARTED
2026-03-23 01:02:48.753051 | orchestrator | 2026-03-23 01:02:48 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:02:51.795004 | orchestrator | 2026-03-23 01:02:51 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:02:51.798264 | orchestrator | 2026-03-23 01:02:51 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED
2026-03-23 01:02:51.801356 | orchestrator | 2026-03-23 01:02:51 | INFO  | Task 5b43eb6e-79a3-4678-bf24-dd5d95389acd is in state STARTED
2026-03-23 01:02:51.806355 | orchestrator | 2026-03-23 01:02:51 | INFO  | Task 408a4784-1c9e-48cf-a87c-cdcb2102c280 is in state SUCCESS
2026-03-23 01:02:51.809364 | orchestrator |
2026-03-23 01:02:51.809449 | orchestrator |
2026-03-23 01:02:51.809456 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 01:02:51.809461 | orchestrator |
2026-03-23 01:02:51.809465 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-23 01:02:51.809470 | orchestrator | Monday 23 March 2026 00:59:57 +0000 (0:00:00.270) 0:00:00.270 **********
2026-03-23 01:02:51.809528 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:02:51.809534 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:02:51.809538 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:02:51.809542 | orchestrator |
2026-03-23 01:02:51.809547 | orchestrator | TASK [Group
hosts based on enabled services] *********************************** 2026-03-23 01:02:51.809551 | orchestrator | Monday 23 March 2026 00:59:57 +0000 (0:00:00.246) 0:00:00.516 ********** 2026-03-23 01:02:51.809555 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-23 01:02:51.809560 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-23 01:02:51.809564 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-23 01:02:51.809568 | orchestrator | 2026-03-23 01:02:51.809572 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-23 01:02:51.809577 | orchestrator | 2026-03-23 01:02:51.809581 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-23 01:02:51.809585 | orchestrator | Monday 23 March 2026 00:59:58 +0000 (0:00:00.248) 0:00:00.764 ********** 2026-03-23 01:02:51.809591 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:02:51.809766 | orchestrator | 2026-03-23 01:02:51.809775 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-23 01:02:51.809779 | orchestrator | Monday 23 March 2026 00:59:59 +0000 (0:00:00.916) 0:00:01.681 ********** 2026-03-23 01:02:51.809783 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-23 01:02:51.809788 | orchestrator | 2026-03-23 01:02:51.809792 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-23 01:02:51.809830 | orchestrator | Monday 23 March 2026 01:00:03 +0000 (0:00:03.853) 0:00:05.535 ********** 2026-03-23 01:02:51.809835 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-23 01:02:51.809840 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> 
https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-23 01:02:51.809844 | orchestrator | 2026-03-23 01:02:51.809848 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-23 01:02:51.809852 | orchestrator | Monday 23 March 2026 01:00:10 +0000 (0:00:07.248) 0:00:12.784 ********** 2026-03-23 01:02:51.809856 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-23 01:02:51.809861 | orchestrator | 2026-03-23 01:02:51.809865 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-23 01:02:51.809869 | orchestrator | Monday 23 March 2026 01:00:13 +0000 (0:00:02.879) 0:00:15.663 ********** 2026-03-23 01:02:51.809884 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-23 01:02:51.809892 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-23 01:02:51.809902 | orchestrator | 2026-03-23 01:02:51.809910 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-23 01:02:51.809917 | orchestrator | Monday 23 March 2026 01:00:16 +0000 (0:00:03.396) 0:00:19.059 ********** 2026-03-23 01:02:51.809958 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-23 01:02:51.809964 | orchestrator | 2026-03-23 01:02:51.809968 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-23 01:02:51.809973 | orchestrator | Monday 23 March 2026 01:00:19 +0000 (0:00:02.837) 0:00:21.897 ********** 2026-03-23 01:02:51.809977 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-23 01:02:51.810175 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-23 01:02:51.810189 | orchestrator | 2026-03-23 01:02:51.810196 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-23 01:02:51.810203 | orchestrator | 
Monday 23 March 2026 01:00:27 +0000 (0:00:08.048) 0:00:29.945 ********** 2026-03-23 01:02:51.810213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.810251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.810260 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.810267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.810274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.810290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.810299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.810324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.810333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.810340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.810346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.810361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.810369 | orchestrator | 2026-03-23 01:02:51.810375 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-23 01:02:51.810382 | orchestrator | Monday 23 March 2026 01:00:30 +0000 (0:00:03.050) 0:00:32.996 ********** 2026-03-23 01:02:51.810390 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:51.810397 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:51.810404 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:51.810411 | orchestrator | 2026-03-23 01:02:51.810417 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-23 01:02:51.810424 | orchestrator | Monday 23 March 2026 01:00:30 +0000 (0:00:00.290) 0:00:33.287 ********** 2026-03-23 01:02:51.810432 | orchestrator | included: 
/ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:02:51.810438 | orchestrator | 2026-03-23 01:02:51.810445 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-23 01:02:51.810452 | orchestrator | Monday 23 March 2026 01:00:31 +0000 (0:00:00.538) 0:00:33.825 ********** 2026-03-23 01:02:51.810517 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-23 01:02:51.810527 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-23 01:02:51.810533 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-23 01:02:51.810540 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-23 01:02:51.810547 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-23 01:02:51.810553 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-23 01:02:51.810560 | orchestrator | 2026-03-23 01:02:51.810567 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-23 01:02:51.810572 | orchestrator | Monday 23 March 2026 01:00:33 +0000 (0:00:02.308) 0:00:36.134 ********** 2026-03-23 01:02:51.810577 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-23 01:02:51.810583 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-23 01:02:51.810600 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-23 01:02:51.810608 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-23 01:02:51.810658 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-23 01:02:51.810668 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-23 01:02:51.810678 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-23 01:02:51.810697 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-23 01:02:51.810704 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-23 01:02:51.810733 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-23 01:02:51.810742 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-23 01:02:51.810749 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-23 01:02:51.810761 | orchestrator | 2026-03-23 01:02:51.810769 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-23 01:02:51.810774 | orchestrator | Monday 23 March 2026 01:00:37 +0000 (0:00:04.182) 0:00:40.316 ********** 2026-03-23 01:02:51.810778 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-23 01:02:51.810783 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-23 01:02:51.810788 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-23 01:02:51.810792 | orchestrator | 2026-03-23 01:02:51.810797 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-23 01:02:51.810802 | orchestrator | Monday 23 March 2026 01:00:39 +0000 (0:00:01.955) 0:00:42.271 ********** 2026-03-23 
01:02:51.810807 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-23 01:02:51.810812 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-23 01:02:51.810817 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-23 01:02:51.810822 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-23 01:02:51.810827 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-23 01:02:51.810834 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-23 01:02:51.810839 | orchestrator | 2026-03-23 01:02:51.810844 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-23 01:02:51.810849 | orchestrator | Monday 23 March 2026 01:00:42 +0000 (0:00:03.122) 0:00:45.393 ********** 2026-03-23 01:02:51.810854 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-23 01:02:51.810859 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-23 01:02:51.810864 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-23 01:02:51.810869 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-23 01:02:51.810873 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-23 01:02:51.810878 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-23 01:02:51.810884 | orchestrator | 2026-03-23 01:02:51.810891 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-23 01:02:51.810901 | orchestrator | Monday 23 March 2026 01:00:43 +0000 (0:00:01.007) 0:00:46.401 ********** 2026-03-23 01:02:51.810909 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:51.810915 | orchestrator | 2026-03-23 01:02:51.810922 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 
2026-03-23 01:02:51.810928 | orchestrator | Monday 23 March 2026 01:00:44 +0000 (0:00:00.156) 0:00:46.558 ********** 2026-03-23 01:02:51.810932 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:51.810936 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:51.810940 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:51.810944 | orchestrator | 2026-03-23 01:02:51.810948 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-23 01:02:51.810953 | orchestrator | Monday 23 March 2026 01:00:44 +0000 (0:00:00.617) 0:00:47.175 ********** 2026-03-23 01:02:51.810957 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:02:51.810961 | orchestrator | 2026-03-23 01:02:51.810965 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-23 01:02:51.810987 | orchestrator | Monday 23 March 2026 01:00:45 +0000 (0:00:00.674) 0:00:47.850 ********** 2026-03-23 01:02:51.810998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811003 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811120 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811142 | orchestrator | 2026-03-23 01:02:51.811146 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-23 01:02:51.811150 | orchestrator | Monday 23 March 2026 01:00:49 +0000 (0:00:03.793) 0:00:51.643 ********** 2026-03-23 01:02:51.811155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 01:02:51.811159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811166 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811177 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:51.811185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 01:02:51.811189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811202 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:51.811209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 01:02:51.811213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811232 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:51.811236 | orchestrator | 2026-03-23 01:02:51.811240 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-23 01:02:51.811244 | orchestrator | Monday 23 March 2026 01:00:50 +0000 (0:00:00.948) 0:00:52.592 ********** 2026-03-23 
01:02:51.811249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 01:02:51.811255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811273 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:51.811278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2026-03-23 01:02:51.811282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811300 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:51.811304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 01:02:51.811311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811324 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:51.811328 | orchestrator | 2026-03-23 01:02:51.811332 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-23 01:02:51.811336 | orchestrator | Monday 23 March 2026 01:00:51 +0000 (0:00:01.459) 0:00:54.051 ********** 2026-03-23 01:02:51.811342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811404 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811411 | orchestrator | 2026-03-23 01:02:51.811415 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-23 01:02:51.811421 | orchestrator | Monday 23 March 2026 01:00:56 +0000 (0:00:04.960) 0:00:59.012 ********** 2026-03-23 01:02:51.811426 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-23 01:02:51.811430 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-23 01:02:51.811434 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-23 01:02:51.811438 | orchestrator | 2026-03-23 01:02:51.811442 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-23 01:02:51.811446 | orchestrator | Monday 23 March 2026 01:00:59 +0000 (0:00:02.663) 0:01:01.675 ********** 2026-03-23 01:02:51.811453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 
01:02:51.811573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811577 | orchestrator | 2026-03-23 01:02:51.811581 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-23 01:02:51.811588 | orchestrator | Monday 23 March 2026 01:01:13 +0000 (0:00:14.497) 0:01:16.173 ********** 2026-03-23 01:02:51.811598 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:51.811606 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:51.811612 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:51.811619 | orchestrator | 2026-03-23 01:02:51.811627 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-03-23 01:02:51.811635 | orchestrator | Monday 23 March 2026 01:01:15 +0000 (0:00:01.549) 0:01:17.723 ********** 2026-03-23 01:02:51.811639 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:51.811643 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:51.811647 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:51.811651 | orchestrator | 2026-03-23 01:02:51.811655 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-23 01:02:51.811659 | orchestrator | Monday 23 March 2026 01:01:16 +0000 (0:00:01.746) 0:01:19.470 ********** 2026-03-23 
01:02:51.811664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 01:02:51.811675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811697 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:51.811704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2026-03-23 01:02:51.811708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811724 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:51.811731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-23 01:02:51.811736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-23 01:02:51.811754 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:51.811758 | orchestrator | 2026-03-23 01:02:51.811762 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-23 01:02:51.811766 | orchestrator | Monday 23 March 2026 01:01:17 +0000 (0:00:00.795) 0:01:20.265 ********** 2026-03-23 01:02:51.811770 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:51.811774 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:51.811778 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:51.811783 | orchestrator | 2026-03-23 01:02:51.811787 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-23 01:02:51.811791 | orchestrator | 
Monday 23 March 2026 01:01:18 +0000 (0:00:00.387) 0:01:20.652 ********** 2026-03-23 01:02:51.811795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811806 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-23 01:02:51.811814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-23 01:02:51.811865 | orchestrator | 2026-03-23 01:02:51.811869 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-23 01:02:51.811873 | orchestrator | Monday 23 March 2026 01:01:21 +0000 (0:00:03.239) 0:01:23.891 ********** 2026-03-23 01:02:51.811877 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:51.811881 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:02:51.811888 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:02:51.811897 | orchestrator | 2026-03-23 01:02:51.811906 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-23 01:02:51.811913 | orchestrator | Monday 23 March 2026 01:01:21 +0000 (0:00:00.319) 0:01:24.210 ********** 2026-03-23 01:02:51.811919 | orchestrator | changed: 
[testbed-node-0] 2026-03-23 01:02:51.811927 | orchestrator | 2026-03-23 01:02:51.811932 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-23 01:02:51.811936 | orchestrator | Monday 23 March 2026 01:01:23 +0000 (0:00:01.762) 0:01:25.974 ********** 2026-03-23 01:02:51.811940 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:51.811944 | orchestrator | 2026-03-23 01:02:51.811948 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-23 01:02:51.811952 | orchestrator | Monday 23 March 2026 01:01:25 +0000 (0:00:02.095) 0:01:28.069 ********** 2026-03-23 01:02:51.811956 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:51.811960 | orchestrator | 2026-03-23 01:02:51.811967 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-23 01:02:51.811971 | orchestrator | Monday 23 March 2026 01:01:46 +0000 (0:00:20.674) 0:01:48.744 ********** 2026-03-23 01:02:51.811976 | orchestrator | 2026-03-23 01:02:51.811980 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-23 01:02:51.811984 | orchestrator | Monday 23 March 2026 01:01:46 +0000 (0:00:00.062) 0:01:48.806 ********** 2026-03-23 01:02:51.811988 | orchestrator | 2026-03-23 01:02:51.811992 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-23 01:02:51.811996 | orchestrator | Monday 23 March 2026 01:01:46 +0000 (0:00:00.058) 0:01:48.864 ********** 2026-03-23 01:02:51.812000 | orchestrator | 2026-03-23 01:02:51.812004 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-23 01:02:51.812008 | orchestrator | Monday 23 March 2026 01:01:46 +0000 (0:00:00.062) 0:01:48.927 ********** 2026-03-23 01:02:51.812012 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:51.812016 | orchestrator | changed: 
[testbed-node-1] 2026-03-23 01:02:51.812024 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:51.812028 | orchestrator | 2026-03-23 01:02:51.812032 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-23 01:02:51.812036 | orchestrator | Monday 23 March 2026 01:02:11 +0000 (0:00:24.729) 0:02:13.657 ********** 2026-03-23 01:02:51.812040 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:51.812044 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:51.812048 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:51.812052 | orchestrator | 2026-03-23 01:02:51.812057 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-23 01:02:51.812061 | orchestrator | Monday 23 March 2026 01:02:23 +0000 (0:00:12.226) 0:02:25.883 ********** 2026-03-23 01:02:51.812065 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:51.812069 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:51.812073 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:51.812077 | orchestrator | 2026-03-23 01:02:51.812081 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-23 01:02:51.812088 | orchestrator | Monday 23 March 2026 01:02:45 +0000 (0:00:22.042) 0:02:47.926 ********** 2026-03-23 01:02:51.812092 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:02:51.812097 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:02:51.812101 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:02:51.812105 | orchestrator | 2026-03-23 01:02:51.812109 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-23 01:02:51.812113 | orchestrator | Monday 23 March 2026 01:02:50 +0000 (0:00:04.783) 0:02:52.710 ********** 2026-03-23 01:02:51.812117 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:02:51.812121 | orchestrator | 2026-03-23 
01:02:51.812125 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:02:51.812130 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-23 01:02:51.812135 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-23 01:02:51.812141 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-23 01:02:51.812148 | orchestrator | 2026-03-23 01:02:51.812154 | orchestrator | 2026-03-23 01:02:51.812162 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:02:51.812166 | orchestrator | Monday 23 March 2026 01:02:50 +0000 (0:00:00.234) 0:02:52.944 ********** 2026-03-23 01:02:51.812170 | orchestrator | =============================================================================== 2026-03-23 01:02:51.812174 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.73s 2026-03-23 01:02:51.812178 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 22.04s 2026-03-23 01:02:51.812183 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.67s 2026-03-23 01:02:51.812187 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.50s 2026-03-23 01:02:51.812191 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.23s 2026-03-23 01:02:51.812195 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.05s 2026-03-23 01:02:51.812199 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.25s 2026-03-23 01:02:51.812203 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.96s 2026-03-23 01:02:51.812207 | 
orchestrator | cinder : Restart cinder-backup container -------------------------------- 4.78s 2026-03-23 01:02:51.812211 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.18s 2026-03-23 01:02:51.812215 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.85s 2026-03-23 01:02:51.812219 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.79s 2026-03-23 01:02:51.812227 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.40s 2026-03-23 01:02:51.812231 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.24s 2026-03-23 01:02:51.812235 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.12s 2026-03-23 01:02:51.812239 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.05s 2026-03-23 01:02:51.812243 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.88s 2026-03-23 01:02:51.812248 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.84s 2026-03-23 01:02:51.812252 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.66s 2026-03-23 01:02:51.812258 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.31s 2026-03-23 01:02:51.812262 | orchestrator | 2026-03-23 01:02:51 | INFO  | Task 15e38b09-48ca-4076-9627-fca25c956038 is in state STARTED 2026-03-23 01:02:51.812267 | orchestrator | 2026-03-23 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:02:54.856328 | orchestrator | 2026-03-23 01:02:54 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:02:54.858118 | orchestrator | 2026-03-23 01:02:54 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 
01:02:54.859356 | orchestrator | 2026-03-23 01:02:54 | INFO  | Task 5b43eb6e-79a3-4678-bf24-dd5d95389acd is in state STARTED 2026-03-23 01:02:54.860946 | orchestrator | 2026-03-23 01:02:54 | INFO  | Task 15e38b09-48ca-4076-9627-fca25c956038 is in state STARTED 2026-03-23 01:02:54.861094 | orchestrator | 2026-03-23 01:02:54 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks efd2496d-b074-49c7-8fc2-615648d53563, da0cb62a-f5c2-4021-b2b3-87ba4584d8da, 5b43eb6e-79a3-4678-bf24-dd5d95389acd and 15e38b09-48ca-4076-9627-fca25c956038 repeated every 3 seconds from 01:02:57 to 01:04:28; all four tasks remained in state STARTED ...]
2026-03-23 01:04:31.981872 | orchestrator | 2026-03-23 01:04:31 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:04:31.981973 | orchestrator | 2026-03-23 01:04:31 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:04:31.990460 | orchestrator | 2026-03-23 01:04:31 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:04:31.990506 | orchestrator | 2026-03-23 01:04:31 | INFO  | Task 5b43eb6e-79a3-4678-bf24-dd5d95389acd is in state SUCCESS 2026-03-23 01:04:31.990510 | orchestrator | 2026-03-23 01:04:31 | INFO  | Task 15e38b09-48ca-4076-9627-fca25c956038 is in state STARTED 2026-03-23 01:04:31.990514 | orchestrator | 2026-03-23 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:04:31.991096 | orchestrator | 2026-03-23 01:04:31.991115 | orchestrator | 2026-03-23 01:04:31.991119 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:04:31.991123 | 
orchestrator |
2026-03-23 01:04:31.991132 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-23 01:04:31.991138 | orchestrator | Monday 23 March 2026 01:02:38 +0000 (0:00:00.279) 0:00:00.279 **********
2026-03-23 01:04:31.991143 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:04:31.991149 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:04:31.991153 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:04:31.991158 | orchestrator |
2026-03-23 01:04:31.991163 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 01:04:31.991168 | orchestrator | Monday 23 March 2026 01:02:38 +0000 (0:00:00.255) 0:00:00.535 **********
2026-03-23 01:04:31.991173 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-23 01:04:31.991178 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-23 01:04:31.991183 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-23 01:04:31.991188 | orchestrator |
2026-03-23 01:04:31.991193 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-23 01:04:31.991198 | orchestrator |
2026-03-23 01:04:31.991203 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-23 01:04:31.991209 | orchestrator | Monday 23 March 2026 01:02:38 +0000 (0:00:00.276) 0:00:00.811 **********
2026-03-23 01:04:31.991215 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:04:31.991221 | orchestrator |
2026-03-23 01:04:31.991243 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-23 01:04:31.991253 | orchestrator | Monday 23 March 2026 01:02:39 +0000 (0:00:00.539) 0:00:01.351 **********
2026-03-23 01:04:31.991257 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-23 01:04:31.991260 | orchestrator |
2026-03-23 01:04:31.991263 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-23 01:04:31.991266 | orchestrator | Monday 23 March 2026 01:02:42 +0000 (0:00:03.642) 0:00:04.994 **********
2026-03-23 01:04:31.991270 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-23 01:04:31.991273 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-23 01:04:31.991276 | orchestrator |
2026-03-23 01:04:31.991279 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-23 01:04:31.991283 | orchestrator | Monday 23 March 2026 01:02:48 +0000 (0:00:05.637) 0:00:10.631 **********
2026-03-23 01:04:31.991286 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-23 01:04:31.991289 | orchestrator |
2026-03-23 01:04:31.991292 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-23 01:04:31.991295 | orchestrator | Monday 23 March 2026 01:02:51 +0000 (0:00:03.160) 0:00:13.792 **********
2026-03-23 01:04:31.991298 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-23 01:04:31.991301 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-23 01:04:31.991304 | orchestrator |
2026-03-23 01:04:31.991307 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-23 01:04:31.991317 | orchestrator | Monday 23 March 2026 01:02:55 +0000 (0:00:03.815) 0:00:17.608 **********
2026-03-23 01:04:31.991499 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-23 01:04:31.991548 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-23 01:04:31.991556 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-23 01:04:31.991561 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-23 01:04:31.991566 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-23 01:04:31.991572 | orchestrator |
2026-03-23 01:04:31.991577 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-23 01:04:31.991582 | orchestrator | Monday 23 March 2026 01:03:10 +0000 (0:00:15.328) 0:00:32.937 **********
2026-03-23 01:04:31.991587 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-23 01:04:31.991593 | orchestrator |
2026-03-23 01:04:31.991598 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-23 01:04:31.991603 | orchestrator | Monday 23 March 2026 01:03:14 +0000 (0:00:03.982) 0:00:36.920 **********
2026-03-23 01:04:31.991611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991694 | orchestrator |
2026-03-23 01:04:31.991699 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-23 01:04:31.991704 | orchestrator | Monday 23 March 2026 01:03:17 +0000 (0:00:02.546) 0:00:39.466 **********
2026-03-23 01:04:31.991709 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-23 01:04:31.991714 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-23 01:04:31.991723 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-23 01:04:31.991728 | orchestrator |
2026-03-23 01:04:31.991733 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-23 01:04:31.991738 | orchestrator | Monday 23 March 2026 01:03:19 +0000 (0:00:01.983) 0:00:41.450 **********
2026-03-23 01:04:31.991743 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:04:31.991749 | orchestrator |
2026-03-23 01:04:31.991754 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-23 01:04:31.991759 | orchestrator | Monday 23 March 2026 01:03:19 +0000 (0:00:00.195) 0:00:41.645 **********
2026-03-23 01:04:31.991764 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:04:31.991769 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:04:31.991774 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:04:31.991779 | orchestrator |
2026-03-23 01:04:31.991784 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-23 01:04:31.991790 | orchestrator | Monday 23 March 2026 01:03:19 +0000 (0:00:00.493) 0:00:42.139 **********
2026-03-23 01:04:31.991795 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:04:31.991800 | orchestrator |
2026-03-23 01:04:31.991805 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-03-23 01:04:31.991810 | orchestrator | Monday 23 March 2026 01:03:21 +0000 (0:00:01.356) 0:00:43.495 **********
2026-03-23 01:04:31.991815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991863 | orchestrator |
2026-03-23 01:04:31.991866 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-03-23 01:04:31.991869 | orchestrator | Monday 23 March 2026 01:03:24 +0000 (0:00:03.537) 0:00:47.033 **********
2026-03-23 01:04:31.991874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991885 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:04:31.991890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991904 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:04:31.991907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991917 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:04:31.991920 | orchestrator |
2026-03-23 01:04:31.991923 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-03-23 01:04:31.991926 | orchestrator | Monday 23 March 2026 01:03:25 +0000 (0:00:00.968) 0:00:48.001 **********
2026-03-23 01:04:31.991933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991951 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:04:31.991954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991964 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:04:31.991969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.991976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.991983 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:04:31.991986 | orchestrator |
2026-03-23 01:04:31.991990 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-03-23 01:04:31.991995 | orchestrator | Monday 23 March 2026 01:03:27 +0000 (0:00:01.419) 0:00:49.421 **********
2026-03-23 01:04:31.992000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.992007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.992019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-23 01:04:31.992028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.992033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.992038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.992044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.992051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:04:31.992059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group':
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992064 | orchestrator | 2026-03-23 01:04:31.992069 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-23 01:04:31.992074 | orchestrator | Monday 23 March 2026 01:03:31 +0000 (0:00:03.886) 0:00:53.307 ********** 2026-03-23 01:04:31.992079 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:04:31.992084 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:04:31.992089 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:04:31.992094 | orchestrator | 2026-03-23 01:04:31.992099 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-23 01:04:31.992105 | orchestrator | Monday 23 March 2026 01:03:32 +0000 (0:00:01.495) 0:00:54.802 ********** 2026-03-23 01:04:31.992110 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-23 01:04:31.992113 | orchestrator | 2026-03-23 01:04:31.992117 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-23 01:04:31.992120 | orchestrator | Monday 23 March 2026 01:03:34 +0000 (0:00:02.142) 0:00:56.945 ********** 2026-03-23 01:04:31.992123 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:04:31.992126 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:04:31.992130 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:04:31.992134 | orchestrator | 2026-03-23 01:04:31.992137 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-23 
01:04:31.992141 | orchestrator | Monday 23 March 2026 01:03:36 +0000 (0:00:01.409) 0:00:58.355 ********** 2026-03-23 01:04:31.992145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-23 01:04:31.992149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2026-03-23 01:04:31.992158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-23 01:04:31.992162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992189 | orchestrator | 2026-03-23 01:04:31.992192 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-23 01:04:31.992196 | orchestrator | Monday 23 March 2026 01:03:45 +0000 (0:00:08.896) 0:01:07.251 ********** 2026-03-23 01:04:31.992202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-23 01:04:31.992208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-23 01:04:31.992212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:04:31.992216 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:04:31.992220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-23 01:04:31.992225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-23 01:04:31.992231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:04:31.992236 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:04:31.992246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-23 01:04:31.992253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-23 01:04:31.992258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:04:31.992267 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:04:31.992272 | orchestrator | 2026-03-23 01:04:31.992277 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-23 01:04:31.992283 | orchestrator | Monday 23 March 2026 01:03:46 +0000 (0:00:01.275) 0:01:08.527 ********** 2026-03-23 01:04:31.992288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-23 01:04:31.992297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-23 01:04:31.992306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-23 01:04:31.992312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:04:31.992366 | orchestrator | 2026-03-23 01:04:31.992371 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-23 01:04:31.992379 | orchestrator | Monday 23 March 2026 01:03:49 +0000 (0:00:02.982) 0:01:11.509 ********** 2026-03-23 01:04:31.992384 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:04:31.992390 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:04:31.992395 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:04:31.992400 | orchestrator | 2026-03-23 01:04:31.992405 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-23 
01:04:31.992411 | orchestrator | Monday 23 March 2026 01:03:50 +0000 (0:00:00.803) 0:01:12.313 ********** 2026-03-23 01:04:31.992415 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:04:31.992421 | orchestrator | 2026-03-23 01:04:31.992426 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-23 01:04:31.992431 | orchestrator | Monday 23 March 2026 01:03:52 +0000 (0:00:02.251) 0:01:14.565 ********** 2026-03-23 01:04:31.992437 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:04:31.992444 | orchestrator | 2026-03-23 01:04:31.992449 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-23 01:04:31.992454 | orchestrator | Monday 23 March 2026 01:03:54 +0000 (0:00:01.996) 0:01:16.561 ********** 2026-03-23 01:04:31.992459 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:04:31.992464 | orchestrator | 2026-03-23 01:04:31.992469 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-23 01:04:31.992474 | orchestrator | Monday 23 March 2026 01:04:06 +0000 (0:00:12.172) 0:01:28.734 ********** 2026-03-23 01:04:31.992479 | orchestrator | 2026-03-23 01:04:31.992484 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-23 01:04:31.992489 | orchestrator | Monday 23 March 2026 01:04:06 +0000 (0:00:00.489) 0:01:29.223 ********** 2026-03-23 01:04:31.992494 | orchestrator | 2026-03-23 01:04:31.992499 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-23 01:04:31.992504 | orchestrator | Monday 23 March 2026 01:04:07 +0000 (0:00:00.115) 0:01:29.338 ********** 2026-03-23 01:04:31.992509 | orchestrator | 2026-03-23 01:04:31.992515 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-23 01:04:31.992519 | orchestrator | Monday 23 March 2026 01:04:07 +0000 
(0:00:00.081) 0:01:29.420 ********** 2026-03-23 01:04:31.992524 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:04:31.992529 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:04:31.992535 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:04:31.992542 | orchestrator | 2026-03-23 01:04:31.992549 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-23 01:04:31.992555 | orchestrator | Monday 23 March 2026 01:04:15 +0000 (0:00:08.613) 0:01:38.034 ********** 2026-03-23 01:04:31.992560 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:04:31.992565 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:04:31.992571 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:04:31.992575 | orchestrator | 2026-03-23 01:04:31.992581 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-23 01:04:31.992586 | orchestrator | Monday 23 March 2026 01:04:22 +0000 (0:00:06.545) 0:01:44.579 ********** 2026-03-23 01:04:31.992591 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:04:31.992597 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:04:31.992602 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:04:31.992607 | orchestrator | 2026-03-23 01:04:31.992612 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:04:31.992618 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-23 01:04:31.992624 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-23 01:04:31.992629 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-23 01:04:31.992634 | orchestrator | 2026-03-23 01:04:31.992639 | orchestrator | 2026-03-23 01:04:31.992644 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-23 01:04:31.992648 | orchestrator | Monday 23 March 2026 01:04:29 +0000 (0:00:06.827) 0:01:51.407 ********** 2026-03-23 01:04:31.992653 | orchestrator | =============================================================================== 2026-03-23 01:04:31.992659 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.33s 2026-03-23 01:04:31.992668 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.17s 2026-03-23 01:04:31.992674 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.90s 2026-03-23 01:04:31.992680 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.61s 2026-03-23 01:04:31.992685 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.83s 2026-03-23 01:04:31.992690 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.55s 2026-03-23 01:04:31.992700 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.64s 2026-03-23 01:04:31.992706 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.98s 2026-03-23 01:04:31.992711 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.89s 2026-03-23 01:04:31.992717 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.82s 2026-03-23 01:04:31.992722 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.64s 2026-03-23 01:04:31.992728 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.54s 2026-03-23 01:04:31.992733 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.16s 2026-03-23 01:04:31.992738 | orchestrator | barbican : Check barbican 
containers ------------------------------------ 2.98s 2026-03-23 01:04:31.992744 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.55s 2026-03-23 01:04:31.992753 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.26s 2026-03-23 01:04:31.992758 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.14s 2026-03-23 01:04:31.992763 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 1.99s 2026-03-23 01:04:31.992769 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.98s 2026-03-23 01:04:31.992774 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.50s 2026-03-23 01:04:35.011673 | orchestrator | 2026-03-23 01:04:35 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:04:35.011862 | orchestrator | 2026-03-23 01:04:35 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:04:35.012896 | orchestrator | 2026-03-23 01:04:35 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:04:35.012988 | orchestrator | 2026-03-23 01:04:35 | INFO  | Task 15e38b09-48ca-4076-9627-fca25c956038 is in state STARTED 2026-03-23 01:04:35.013894 | orchestrator | 2026-03-23 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:04:38.038457 | orchestrator | 2026-03-23 01:04:38 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:04:38.038711 | orchestrator | 2026-03-23 01:04:38 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:04:38.039285 | orchestrator | 2026-03-23 01:04:38 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:04:38.041848 | orchestrator | 2026-03-23 01:04:38 | INFO  | Task 15e38b09-48ca-4076-9627-fca25c956038 is in state STARTED 
2026-03-23 01:04:38.041895 | orchestrator | 2026-03-23 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:05:44.998814 | orchestrator | 2026-03-23 01:05:44 | INFO  | Task
efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:05:44.999805 | orchestrator | 2026-03-23 01:05:45 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:05:45.004777 | orchestrator | 2026-03-23 01:05:45 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:05:45.004832 | orchestrator | 2026-03-23 01:05:45 | INFO  | Task 15e38b09-48ca-4076-9627-fca25c956038 is in state STARTED 2026-03-23 01:05:45.004839 | orchestrator | 2026-03-23 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:05:48.035215 | orchestrator | 2026-03-23 01:05:48 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:05:48.036337 | orchestrator | 2026-03-23 01:05:48 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:05:48.037846 | orchestrator | 2026-03-23 01:05:48 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:05:48.040350 | orchestrator | 2026-03-23 01:05:48 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:05:48.045257 | orchestrator | 2026-03-23 01:05:48 | INFO  | Task 15e38b09-48ca-4076-9627-fca25c956038 is in state SUCCESS 2026-03-23 01:05:48.046117 | orchestrator | 2026-03-23 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:05:48.047039 | orchestrator | 2026-03-23 01:05:48.047073 | orchestrator | 2026-03-23 01:05:48.047079 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:05:48.047083 | orchestrator | 2026-03-23 01:05:48.047086 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 01:05:48.047089 | orchestrator | Monday 23 March 2026 01:02:53 +0000 (0:00:00.270) 0:00:00.270 ********** 2026-03-23 01:05:48.047093 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:05:48.047096 | orchestrator | ok: [testbed-node-1] 
2026-03-23 01:05:48.047100 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:05:48.047103 | orchestrator | 2026-03-23 01:05:48.047106 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 01:05:48.047131 | orchestrator | Monday 23 March 2026 01:02:53 +0000 (0:00:00.251) 0:00:00.521 ********** 2026-03-23 01:05:48.047135 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-23 01:05:48.047139 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-23 01:05:48.047142 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-23 01:05:48.047145 | orchestrator | 2026-03-23 01:05:48.047148 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-23 01:05:48.047151 | orchestrator | 2026-03-23 01:05:48.047155 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-23 01:05:48.047158 | orchestrator | Monday 23 March 2026 01:02:54 +0000 (0:00:00.258) 0:00:00.780 ********** 2026-03-23 01:05:48.047162 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:05:48.047165 | orchestrator | 2026-03-23 01:05:48.047169 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-23 01:05:48.047172 | orchestrator | Monday 23 March 2026 01:02:54 +0000 (0:00:00.532) 0:00:01.312 ********** 2026-03-23 01:05:48.047175 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-23 01:05:48.047178 | orchestrator | 2026-03-23 01:05:48.047181 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-23 01:05:48.047185 | orchestrator | Monday 23 March 2026 01:02:58 +0000 (0:00:03.716) 0:00:05.029 ********** 2026-03-23 01:05:48.047194 | orchestrator | changed: [testbed-node-0] => 
(item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-23 01:05:48.047198 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-23 01:05:48.047201 | orchestrator | 2026-03-23 01:05:48.047229 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-23 01:05:48.047233 | orchestrator | Monday 23 March 2026 01:03:04 +0000 (0:00:06.197) 0:00:11.227 ********** 2026-03-23 01:05:48.047236 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-23 01:05:48.047240 | orchestrator | 2026-03-23 01:05:48.047243 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-23 01:05:48.047246 | orchestrator | Monday 23 March 2026 01:03:07 +0000 (0:00:03.271) 0:00:14.499 ********** 2026-03-23 01:05:48.047250 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-23 01:05:48.047256 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-23 01:05:48.047261 | orchestrator | 2026-03-23 01:05:48.047285 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-23 01:05:48.047292 | orchestrator | Monday 23 March 2026 01:03:11 +0000 (0:00:03.660) 0:00:18.159 ********** 2026-03-23 01:05:48.047297 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-23 01:05:48.047303 | orchestrator | 2026-03-23 01:05:48.047308 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-23 01:05:48.047312 | orchestrator | Monday 23 March 2026 01:03:14 +0000 (0:00:03.451) 0:00:21.610 ********** 2026-03-23 01:05:48.047317 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-23 01:05:48.047321 | orchestrator | 2026-03-23 01:05:48.047327 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 
2026-03-23 01:05:48.047332 | orchestrator | Monday 23 March 2026 01:03:19 +0000 (0:00:04.114) 0:00:25.724 ********** 2026-03-23 01:05:48.047349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 01:05:48.047401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 01:05:48.047410 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 01:05:48.047421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}}) 2026-03-23 01:05:48.047467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047516 | orchestrator | 2026-03-23 01:05:48.047535 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-23 01:05:48.047541 | orchestrator | Monday 23 March 2026 01:03:23 +0000 (0:00:04.570) 0:00:30.295 ********** 2026-03-23 01:05:48.047546 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:05:48.047551 | orchestrator | 2026-03-23 01:05:48.047557 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-23 01:05:48.047563 | orchestrator | Monday 23 March 2026 01:03:23 +0000 (0:00:00.214) 0:00:30.510 ********** 2026-03-23 01:05:48.047568 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:05:48.047592 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:05:48.047601 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:05:48.047606 | orchestrator | 2026-03-23 01:05:48.047612 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-23 01:05:48.047617 | orchestrator | Monday 23 March 2026 01:03:24 +0000 (0:00:00.321) 0:00:30.831 ********** 2026-03-23 01:05:48.047622 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:05:48.047628 | orchestrator | 2026-03-23 01:05:48.047633 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-23 01:05:48.047639 | orchestrator | Monday 23 March 2026 01:03:24 +0000 (0:00:00.674) 0:00:31.505 ********** 
2026-03-23 01:05:48.047643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 01:05:48.047650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 01:05:48.047657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 01:05:48.047662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2026-03-23 01:05:48.047685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.047737 | orchestrator | 2026-03-23 01:05:48.047741 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-23 01:05:48.047745 | orchestrator | Monday 23 March 2026 01:03:31 +0000 (0:00:06.826) 0:00:38.332 ********** 2026-03-23 01:05:48.047749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.047753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 01:05:48.047760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.047764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.047769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.047777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.047781 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:05:48.047785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.047788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 01:05:48.048107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048144 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:05:48.048148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.048151 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 01:05:48.048160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048177 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:05:48.048180 | orchestrator | 2026-03-23 01:05:48.048183 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-23 01:05:48.048187 | orchestrator | Monday 23 March 2026 01:03:32 +0000 (0:00:00.746) 0:00:39.078 ********** 2026-03-23 01:05:48.048190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.048193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 01:05:48.048199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048244 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:05:48.048254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-23 01:05:48.048266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048284 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:05:48.048287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-23 01:05:48.048294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048314 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:05:48.048317 | orchestrator |
2026-03-23 01:05:48.048321 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-03-23 01:05:48.048326 | orchestrator | Monday 23 March 2026 01:03:34 +0000 (0:00:02.180) 0:00:41.259 **********
2026-03-23 01:05:48.048331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-23 01:05:48.048360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-23 01:05:48.048365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-23 01:05:48.048370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048452 | orchestrator |
2026-03-23 01:05:48.048457 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-23 01:05:48.048462 | orchestrator | Monday 23 March 2026 01:03:42 +0000 (0:00:08.391) 0:00:49.650 **********
2026-03-23 01:05:48.048470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-23 01:05:48.048498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-23 01:05:48.048505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-23 01:05:48.048509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048588 | orchestrator |
2026-03-23 01:05:48.048593 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-23 01:05:48.048598 | orchestrator | Monday 23 March 2026 01:04:02 +0000 (0:00:19.120) 0:01:08.771 **********
2026-03-23 01:05:48.048602 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-23 01:05:48.048607 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-23 01:05:48.048612 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-23 01:05:48.048617 | orchestrator |
2026-03-23 01:05:48.048621 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-23 01:05:48.048626 | orchestrator | Monday 23 March 2026 01:04:08 +0000 (0:00:05.987) 0:01:14.759 **********
2026-03-23 01:05:48.048631 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-23 01:05:48.048637 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-23 01:05:48.048643 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-23 01:05:48.048647 | orchestrator |
2026-03-23 01:05:48.048651 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-23 01:05:48.048656 | orchestrator | Monday 23 March 2026 01:04:12 +0000 (0:00:04.166) 0:01:18.926 **********
2026-03-23 01:05:48.048661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-23 01:05:48.048680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-23 01:05:48.048685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-23 01:05:48.048704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external':
False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.048713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.048718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.048748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.048778 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.048783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.048788 | orchestrator | 2026-03-23 01:05:48.048793 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-23 01:05:48.048799 | orchestrator | Monday 23 March 2026 01:04:15 +0000 (0:00:03.748) 0:01:22.674 ********** 2026-03-23 01:05:48.048810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.048816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.048826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.048839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.048845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.048880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-23 01:05:48.048907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.048929 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049624 | orchestrator | 2026-03-23 01:05:48.049629 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-23 01:05:48.049634 | orchestrator | Monday 
23 March 2026 01:04:20 +0000 (0:00:04.326) 0:01:27.001 ********** 2026-03-23 01:05:48.049639 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:05:48.049644 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:05:48.049649 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:05:48.049654 | orchestrator | 2026-03-23 01:05:48.049659 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-23 01:05:48.049664 | orchestrator | Monday 23 March 2026 01:04:21 +0000 (0:00:00.697) 0:01:27.699 ********** 2026-03-23 01:05:48.049670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.049687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 01:05:48.049693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.049754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049763 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:05:48.049769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 01:05:48.049774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049800 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:05:48.049808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-23 01:05:48.049817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-23 01:05:48.049822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:05:48.049845 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:05:48.049851 | orchestrator | 2026-03-23 01:05:48.049856 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-23 01:05:48.049860 | orchestrator | Monday 23 March 2026 01:04:23 +0000 (0:00:02.428) 0:01:30.127 ********** 2026-03-23 01:05:48.049879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 01:05:48.049885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 01:05:48.049890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-23 01:05:48.049896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049962 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.049995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:05:48.050001 | orchestrator | 2026-03-23 01:05:48.050006 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-23 01:05:48.050061 | orchestrator | Monday 23 March 2026 01:04:28 +0000 (0:00:05.327) 0:01:35.455 ********** 2026-03-23 01:05:48.050070 | 
orchestrator | skipping: [testbed-node-0] 2026-03-23 01:05:48.050076 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:05:48.050081 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:05:48.050086 | orchestrator | 2026-03-23 01:05:48.050091 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-23 01:05:48.050097 | orchestrator | Monday 23 March 2026 01:04:29 +0000 (0:00:00.340) 0:01:35.796 ********** 2026-03-23 01:05:48.050103 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-23 01:05:48.050109 | orchestrator | 2026-03-23 01:05:48.050114 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-23 01:05:48.050119 | orchestrator | Monday 23 March 2026 01:04:31 +0000 (0:00:01.953) 0:01:37.750 ********** 2026-03-23 01:05:48.050125 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-23 01:05:48.050131 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-23 01:05:48.050137 | orchestrator | 2026-03-23 01:05:48.050143 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-23 01:05:48.050148 | orchestrator | Monday 23 March 2026 01:04:33 +0000 (0:00:02.218) 0:01:39.969 ********** 2026-03-23 01:05:48.050154 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:05:48.050160 | orchestrator | 2026-03-23 01:05:48.050167 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-23 01:05:48.050174 | orchestrator | Monday 23 March 2026 01:04:46 +0000 (0:00:12.943) 0:01:52.913 ********** 2026-03-23 01:05:48.050180 | orchestrator | 2026-03-23 01:05:48.050186 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-23 01:05:48.050192 | orchestrator | Monday 23 March 2026 01:04:46 +0000 (0:00:00.067) 0:01:52.980 ********** 2026-03-23 01:05:48.050198 | 
orchestrator | 2026-03-23 01:05:48.050246 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-23 01:05:48.050255 | orchestrator | Monday 23 March 2026 01:04:46 +0000 (0:00:00.067) 0:01:53.048 ********** 2026-03-23 01:05:48.050261 | orchestrator | 2026-03-23 01:05:48.050266 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-23 01:05:48.050272 | orchestrator | Monday 23 March 2026 01:04:46 +0000 (0:00:00.068) 0:01:53.116 ********** 2026-03-23 01:05:48.050277 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:05:48.050283 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:05:48.050288 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:05:48.050293 | orchestrator | 2026-03-23 01:05:48.050299 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-23 01:05:48.050304 | orchestrator | Monday 23 March 2026 01:04:58 +0000 (0:00:12.535) 0:02:05.651 ********** 2026-03-23 01:05:48.050311 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:05:48.050316 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:05:48.050321 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:05:48.050326 | orchestrator | 2026-03-23 01:05:48.050332 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-23 01:05:48.050337 | orchestrator | Monday 23 March 2026 01:05:08 +0000 (0:00:09.243) 0:02:14.895 ********** 2026-03-23 01:05:48.050342 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:05:48.050348 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:05:48.050354 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:05:48.050359 | orchestrator | 2026-03-23 01:05:48.050364 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-23 01:05:48.050369 | orchestrator | Monday 23 March 2026 01:05:13 +0000 
(0:00:05.623) 0:02:20.518 ********** 2026-03-23 01:05:48.050375 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:05:48.050380 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:05:48.050386 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:05:48.050391 | orchestrator | 2026-03-23 01:05:48.050397 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-23 01:05:48.050408 | orchestrator | Monday 23 March 2026 01:05:24 +0000 (0:00:10.519) 0:02:31.038 ********** 2026-03-23 01:05:48.050413 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:05:48.050419 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:05:48.050424 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:05:48.050429 | orchestrator | 2026-03-23 01:05:48.050434 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-23 01:05:48.050440 | orchestrator | Monday 23 March 2026 01:05:29 +0000 (0:00:05.365) 0:02:36.404 ********** 2026-03-23 01:05:48.050446 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:05:48.050451 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:05:48.050456 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:05:48.050460 | orchestrator | 2026-03-23 01:05:48.050465 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-23 01:05:48.050469 | orchestrator | Monday 23 March 2026 01:05:37 +0000 (0:00:07.441) 0:02:43.845 ********** 2026-03-23 01:05:48.050475 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:05:48.050480 | orchestrator | 2026-03-23 01:05:48.050486 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:05:48.050491 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-23 01:05:48.050498 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 
failed=0 skipped=6  rescued=0 ignored=0 2026-03-23 01:05:48.050503 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-23 01:05:48.050507 | orchestrator | 2026-03-23 01:05:48.050512 | orchestrator | 2026-03-23 01:05:48.050525 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:05:48.050531 | orchestrator | Monday 23 March 2026 01:05:45 +0000 (0:00:07.848) 0:02:51.694 ********** 2026-03-23 01:05:48.050536 | orchestrator | =============================================================================== 2026-03-23 01:05:48.050541 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.12s 2026-03-23 01:05:48.050546 | orchestrator | designate : Running Designate bootstrap container ---------------------- 12.94s 2026-03-23 01:05:48.050551 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.54s 2026-03-23 01:05:48.050556 | orchestrator | designate : Restart designate-producer container ----------------------- 10.52s 2026-03-23 01:05:48.050562 | orchestrator | designate : Restart designate-api container ----------------------------- 9.24s 2026-03-23 01:05:48.050567 | orchestrator | designate : Copying over config.json files for services ----------------- 8.39s 2026-03-23 01:05:48.050572 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.85s 2026-03-23 01:05:48.050577 | orchestrator | designate : Restart designate-worker container -------------------------- 7.44s 2026-03-23 01:05:48.050582 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.83s 2026-03-23 01:05:48.050587 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.20s 2026-03-23 01:05:48.050592 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.99s 
2026-03-23 01:05:48.050597 | orchestrator | designate : Restart designate-central container ------------------------- 5.62s 2026-03-23 01:05:48.050602 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.37s 2026-03-23 01:05:48.050608 | orchestrator | designate : Check designate containers ---------------------------------- 5.33s 2026-03-23 01:05:48.050613 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.57s 2026-03-23 01:05:48.050619 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.33s 2026-03-23 01:05:48.050628 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.17s 2026-03-23 01:05:48.050634 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.11s 2026-03-23 01:05:48.050645 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.75s 2026-03-23 01:05:48.050650 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.72s 2026-03-23 01:05:51.092506 | orchestrator | 2026-03-23 01:05:51 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:05:51.093020 | orchestrator | 2026-03-23 01:05:51 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:05:51.093634 | orchestrator | 2026-03-23 01:05:51 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:05:51.094436 | orchestrator | 2026-03-23 01:05:51 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:05:51.094456 | orchestrator | 2026-03-23 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:05:54.120798 | orchestrator | 2026-03-23 01:05:54 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:05:54.121673 | orchestrator | 2026-03-23 01:05:54 | INFO  | Task 
da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:05:54.121916 | orchestrator | 2026-03-23 01:05:54 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:05:54.122598 | orchestrator | 2026-03-23 01:05:54 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:05:54.122835 | orchestrator | 2026-03-23 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:05:57.148632 | orchestrator | 2026-03-23 01:05:57 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:05:57.151500 | orchestrator | 2026-03-23 01:05:57 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:05:57.152012 | orchestrator | 2026-03-23 01:05:57 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:05:57.152816 | orchestrator | 2026-03-23 01:05:57 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:05:57.152847 | orchestrator | 2026-03-23 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:00.229364 | orchestrator | 2026-03-23 01:06:00 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:00.229887 | orchestrator | 2026-03-23 01:06:00 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:00.230438 | orchestrator | 2026-03-23 01:06:00 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:00.231196 | orchestrator | 2026-03-23 01:06:00 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:00.231224 | orchestrator | 2026-03-23 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:03.257129 | orchestrator | 2026-03-23 01:06:03 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:03.257341 | orchestrator | 2026-03-23 01:06:03 | INFO  | Task 
da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:03.258491 | orchestrator | 2026-03-23 01:06:03 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:03.259335 | orchestrator | 2026-03-23 01:06:03 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:03.259495 | orchestrator | 2026-03-23 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:06.300642 | orchestrator | 2026-03-23 01:06:06 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:06.303163 | orchestrator | 2026-03-23 01:06:06 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:06.303574 | orchestrator | 2026-03-23 01:06:06 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:06.305334 | orchestrator | 2026-03-23 01:06:06 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:06.305381 | orchestrator | 2026-03-23 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:09.333384 | orchestrator | 2026-03-23 01:06:09 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:09.333444 | orchestrator | 2026-03-23 01:06:09 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:09.333505 | orchestrator | 2026-03-23 01:06:09 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:09.336128 | orchestrator | 2026-03-23 01:06:09 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:09.336196 | orchestrator | 2026-03-23 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:12.379039 | orchestrator | 2026-03-23 01:06:12 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:12.380120 | orchestrator | 2026-03-23 01:06:12 | INFO  | Task 
da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:12.381622 | orchestrator | 2026-03-23 01:06:12 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:12.383023 | orchestrator | 2026-03-23 01:06:12 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:12.383056 | orchestrator | 2026-03-23 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:15.424275 | orchestrator | 2026-03-23 01:06:15 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:15.424800 | orchestrator | 2026-03-23 01:06:15 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:15.425657 | orchestrator | 2026-03-23 01:06:15 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:15.426383 | orchestrator | 2026-03-23 01:06:15 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:15.426410 | orchestrator | 2026-03-23 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:18.475570 | orchestrator | 2026-03-23 01:06:18 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:18.478877 | orchestrator | 2026-03-23 01:06:18 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:18.479617 | orchestrator | 2026-03-23 01:06:18 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:18.480872 | orchestrator | 2026-03-23 01:06:18 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:18.481032 | orchestrator | 2026-03-23 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:21.526785 | orchestrator | 2026-03-23 01:06:21 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:21.529281 | orchestrator | 2026-03-23 01:06:21 | INFO  | Task 
da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:21.531520 | orchestrator | 2026-03-23 01:06:21 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:21.533837 | orchestrator | 2026-03-23 01:06:21 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:21.534164 | orchestrator | 2026-03-23 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:24.568039 | orchestrator | 2026-03-23 01:06:24 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:24.568100 | orchestrator | 2026-03-23 01:06:24 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:24.568674 | orchestrator | 2026-03-23 01:06:24 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:24.569290 | orchestrator | 2026-03-23 01:06:24 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:24.569359 | orchestrator | 2026-03-23 01:06:24 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:27.599030 | orchestrator | 2026-03-23 01:06:27 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:27.599429 | orchestrator | 2026-03-23 01:06:27 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:27.600165 | orchestrator | 2026-03-23 01:06:27 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state STARTED 2026-03-23 01:06:27.600891 | orchestrator | 2026-03-23 01:06:27 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:27.600917 | orchestrator | 2026-03-23 01:06:27 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:30.639430 | orchestrator | 2026-03-23 01:06:30 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:30.641052 | orchestrator | 2026-03-23 01:06:30 | INFO  | Task 
da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:30.642538 | orchestrator | 2026-03-23 01:06:30 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:06:30.643462 | orchestrator | 2026-03-23 01:06:30 | INFO  | Task b60688cc-0bde-426b-b65c-237dd6ac9646 is in state SUCCESS 2026-03-23 01:06:30.644901 | orchestrator | 2026-03-23 01:06:30 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:30.645252 | orchestrator | 2026-03-23 01:06:30 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:33.680030 | orchestrator | 2026-03-23 01:06:33 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:33.682528 | orchestrator | 2026-03-23 01:06:33 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:33.684838 | orchestrator | 2026-03-23 01:06:33 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:06:33.686655 | orchestrator | 2026-03-23 01:06:33 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:33.686709 | orchestrator | 2026-03-23 01:06:33 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:36.729008 | orchestrator | 2026-03-23 01:06:36 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:36.731864 | orchestrator | 2026-03-23 01:06:36 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:36.734718 | orchestrator | 2026-03-23 01:06:36 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:06:36.736803 | orchestrator | 2026-03-23 01:06:36 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:36.737028 | orchestrator | 2026-03-23 01:06:36 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:39.780325 | orchestrator | 2026-03-23 01:06:39 | INFO  | Task 
efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:39.781570 | orchestrator | 2026-03-23 01:06:39 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:39.783235 | orchestrator | 2026-03-23 01:06:39 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:06:39.784379 | orchestrator | 2026-03-23 01:06:39 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:39.784406 | orchestrator | 2026-03-23 01:06:39 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:42.816525 | orchestrator | 2026-03-23 01:06:42 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:42.816584 | orchestrator | 2026-03-23 01:06:42 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state STARTED 2026-03-23 01:06:42.818262 | orchestrator | 2026-03-23 01:06:42 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:06:42.819305 | orchestrator | 2026-03-23 01:06:42 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED 2026-03-23 01:06:42.821060 | orchestrator | 2026-03-23 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:06:45.852283 | orchestrator | 2026-03-23 01:06:45 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:06:45.857273 | orchestrator | 2026-03-23 01:06:45 | INFO  | Task da0cb62a-f5c2-4021-b2b3-87ba4584d8da is in state SUCCESS 2026-03-23 01:06:45.859390 | orchestrator | 2026-03-23 01:06:45.859452 | orchestrator | 2026-03-23 01:06:45.859459 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-23 01:06:45.859465 | orchestrator | 2026-03-23 01:06:45.859470 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-23 01:06:45.859476 | orchestrator | Monday 23 March 2026 01:04:32 +0000 (0:00:00.074) 0:00:00.074 
********** 2026-03-23 01:06:45.859481 | orchestrator | changed: [localhost] 2026-03-23 01:06:45.859487 | orchestrator | 2026-03-23 01:06:45.859493 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-23 01:06:45.859498 | orchestrator | Monday 23 March 2026 01:04:34 +0000 (0:00:01.241) 0:00:01.315 ********** 2026-03-23 01:06:45.859502 | orchestrator | changed: [localhost] 2026-03-23 01:06:45.859533 | orchestrator | 2026-03-23 01:06:45.859538 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-23 01:06:45.859543 | orchestrator | Monday 23 March 2026 01:05:17 +0000 (0:00:43.665) 0:00:44.980 ********** 2026-03-23 01:06:45.859549 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-03-23 01:06:45.859555 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 2026-03-23 01:06:45.859559 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (1 retries left). 
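The `FAILED - RETRYING` lines above come from Ansible's `retries` handling on the download task: the fetch is attempted, and on failure it is retried a fixed number of times before the task is marked failed. A minimal sketch of that retry pattern in Python (the function name and `delay` parameter are illustrative, not the actual osism implementation):

```python
import time
import urllib.request


def download_with_retries(url, dest, retries=3, delay=5):
    """Fetch a file, retrying on failure, roughly mirroring an
    Ansible get_url task configured with retries/delay."""
    for attempt in range(1, retries + 1):
        try:
            urllib.request.urlretrieve(url, dest)
            return True
        except OSError as exc:
            # Matches the spirit of Ansible's "N retries left" messages.
            print(f"FAILED - RETRYING: {url} "
                  f"({retries - attempt} retries left): {exc}")
            time.sleep(delay)
    return False
```

In the run above the kernel download succeeded on the final attempt, which is why the task still ends as `changed` despite three retry warnings.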
2026-03-23 01:06:45.859564 | orchestrator | changed: [localhost] 2026-03-23 01:06:45.859570 | orchestrator | 2026-03-23 01:06:45.859575 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:06:45.859580 | orchestrator | 2026-03-23 01:06:45.859586 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 01:06:45.859591 | orchestrator | Monday 23 March 2026 01:06:27 +0000 (0:01:09.415) 0:01:54.396 ********** 2026-03-23 01:06:45.859596 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:06:45.859601 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:06:45.859607 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:06:45.859613 | orchestrator | 2026-03-23 01:06:45.859618 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 01:06:45.859623 | orchestrator | Monday 23 March 2026 01:06:27 +0000 (0:00:00.496) 0:01:54.892 ********** 2026-03-23 01:06:45.859663 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-23 01:06:45.859672 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-23 01:06:45.859677 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-23 01:06:45.859683 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-23 01:06:45.859707 | orchestrator | 2026-03-23 01:06:45.859713 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-23 01:06:45.859718 | orchestrator | skipping: no hosts matched 2026-03-23 01:06:45.859724 | orchestrator | 2026-03-23 01:06:45.859730 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:06:45.859735 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:06:45.859742 | orchestrator | 
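The `Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check` cycles earlier in the log are the orchestrator polling a set of asynchronous (Celery-style) task IDs until each reaches a terminal state. A minimal sketch of such a poll loop, assuming a caller-supplied `get_state` callable rather than the actual osism client API:

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll task IDs until none is still running.

    get_state is assumed to return a state string such as
    'STARTED', 'SUCCESS', or 'FAILURE' for a given task ID.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

Once a task flips to `SUCCESS` (as `b60688cc...` and `da0cb62a...` do above), it drops out of the polled set and its buffered Ansible output is flushed to the log, which is why a burst of play output appears immediately after the state change.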
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:06:45.859749 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:06:45.859753 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:06:45.859758 | orchestrator | 2026-03-23 01:06:45.859762 | orchestrator | 2026-03-23 01:06:45.859767 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:06:45.859772 | orchestrator | Monday 23 March 2026 01:06:28 +0000 (0:00:00.587) 0:01:55.479 ********** 2026-03-23 01:06:45.859778 | orchestrator | =============================================================================== 2026-03-23 01:06:45.859783 | orchestrator | Download ironic-agent kernel ------------------------------------------- 69.42s 2026-03-23 01:06:45.859788 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 43.67s 2026-03-23 01:06:45.859794 | orchestrator | Ensure the destination directory exists --------------------------------- 1.24s 2026-03-23 01:06:45.859799 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2026-03-23 01:06:45.859805 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2026-03-23 01:06:45.859811 | orchestrator | 2026-03-23 01:06:45.859816 | orchestrator | 2026-03-23 01:06:45.859822 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:06:45.859827 | orchestrator | 2026-03-23 01:06:45.859833 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 01:06:45.859838 | orchestrator | Monday 23 March 2026 01:02:32 +0000 (0:00:00.279) 0:00:00.279 ********** 2026-03-23 01:06:45.859843 | orchestrator | ok: [testbed-node-0] 2026-03-23 
01:06:45.859849 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:06:45.859854 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:06:45.859859 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:06:45.859865 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:06:45.859870 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:06:45.859875 | orchestrator | 2026-03-23 01:06:45.859880 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 01:06:45.859886 | orchestrator | Monday 23 March 2026 01:02:33 +0000 (0:00:00.514) 0:00:00.794 ********** 2026-03-23 01:06:45.859891 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-23 01:06:45.859897 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-23 01:06:45.859902 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-23 01:06:45.859908 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-23 01:06:45.859991 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-23 01:06:45.860056 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-23 01:06:45.860063 | orchestrator | 2026-03-23 01:06:45.860069 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-23 01:06:45.860075 | orchestrator | 2026-03-23 01:06:45.860081 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-23 01:06:45.860086 | orchestrator | Monday 23 March 2026 01:02:33 +0000 (0:00:00.605) 0:00:01.400 ********** 2026-03-23 01:06:45.860092 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 01:06:45.860104 | orchestrator | 2026-03-23 01:06:45.860109 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-23 
01:06:45.860127 | orchestrator | Monday 23 March 2026 01:02:34 +0000 (0:00:00.971) 0:00:02.372 ********** 2026-03-23 01:06:45.860133 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:06:45.860138 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:06:45.860143 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:06:45.860148 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:06:45.860153 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:06:45.860158 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:06:45.860164 | orchestrator | 2026-03-23 01:06:45.860169 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-23 01:06:45.860174 | orchestrator | Monday 23 March 2026 01:02:36 +0000 (0:00:01.294) 0:00:03.667 ********** 2026-03-23 01:06:45.860180 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:06:45.860185 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:06:45.860191 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:06:45.860196 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:06:45.860202 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:06:45.860208 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:06:45.860213 | orchestrator | 2026-03-23 01:06:45.860219 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-23 01:06:45.860224 | orchestrator | Monday 23 March 2026 01:02:37 +0000 (0:00:01.098) 0:00:04.765 ********** 2026-03-23 01:06:45.860230 | orchestrator | ok: [testbed-node-0] => { 2026-03-23 01:06:45.860235 | orchestrator |  "changed": false, 2026-03-23 01:06:45.860240 | orchestrator |  "msg": "All assertions passed" 2026-03-23 01:06:45.860245 | orchestrator | } 2026-03-23 01:06:45.860250 | orchestrator | ok: [testbed-node-1] => { 2026-03-23 01:06:45.860255 | orchestrator |  "changed": false, 2026-03-23 01:06:45.860259 | orchestrator |  "msg": "All assertions passed" 2026-03-23 01:06:45.860264 | orchestrator | } 2026-03-23 01:06:45.860269 | 
orchestrator | ok: [testbed-node-2] => { 2026-03-23 01:06:45.860274 | orchestrator |  "changed": false, 2026-03-23 01:06:45.860279 | orchestrator |  "msg": "All assertions passed" 2026-03-23 01:06:45.860284 | orchestrator | } 2026-03-23 01:06:45.860289 | orchestrator | ok: [testbed-node-3] => { 2026-03-23 01:06:45.860294 | orchestrator |  "changed": false, 2026-03-23 01:06:45.860299 | orchestrator |  "msg": "All assertions passed" 2026-03-23 01:06:45.860304 | orchestrator | } 2026-03-23 01:06:45.860310 | orchestrator | ok: [testbed-node-4] => { 2026-03-23 01:06:45.860315 | orchestrator |  "changed": false, 2026-03-23 01:06:45.860321 | orchestrator |  "msg": "All assertions passed" 2026-03-23 01:06:45.860326 | orchestrator | } 2026-03-23 01:06:45.860332 | orchestrator | ok: [testbed-node-5] => { 2026-03-23 01:06:45.860337 | orchestrator |  "changed": false, 2026-03-23 01:06:45.860342 | orchestrator |  "msg": "All assertions passed" 2026-03-23 01:06:45.860347 | orchestrator | } 2026-03-23 01:06:45.860353 | orchestrator | 2026-03-23 01:06:45.860359 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-23 01:06:45.860364 | orchestrator | Monday 23 March 2026 01:02:37 +0000 (0:00:00.515) 0:00:05.281 ********** 2026-03-23 01:06:45.860370 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.860375 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.860381 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.860386 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.860391 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.860397 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.860402 | orchestrator | 2026-03-23 01:06:45.860407 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-23 01:06:45.860413 | orchestrator | Monday 23 March 2026 01:02:38 +0000 (0:00:00.641) 0:00:05.922 ********** 2026-03-23 
01:06:45.860418 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-23 01:06:45.860423 | orchestrator | 2026-03-23 01:06:45.860428 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-23 01:06:45.860439 | orchestrator | Monday 23 March 2026 01:02:41 +0000 (0:00:02.979) 0:00:08.901 ********** 2026-03-23 01:06:45.860445 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-23 01:06:45.860451 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-23 01:06:45.860456 | orchestrator | 2026-03-23 01:06:45.860461 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-23 01:06:45.860467 | orchestrator | Monday 23 March 2026 01:02:46 +0000 (0:00:05.701) 0:00:14.602 ********** 2026-03-23 01:06:45.860472 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-23 01:06:45.860477 | orchestrator | 2026-03-23 01:06:45.860482 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-23 01:06:45.860487 | orchestrator | Monday 23 March 2026 01:02:50 +0000 (0:00:03.196) 0:00:17.799 ********** 2026-03-23 01:06:45.860492 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-23 01:06:45.860498 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-23 01:06:45.860503 | orchestrator | 2026-03-23 01:06:45.860508 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-23 01:06:45.860513 | orchestrator | Monday 23 March 2026 01:02:54 +0000 (0:00:03.914) 0:00:21.713 ********** 2026-03-23 01:06:45.860518 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-23 01:06:45.860524 | orchestrator | 2026-03-23 01:06:45.860529 | orchestrator | TASK [service-ks-register : neutron | 
Granting user roles] ********************* 2026-03-23 01:06:45.860534 | orchestrator | Monday 23 March 2026 01:02:57 +0000 (0:00:03.288) 0:00:25.002 ********** 2026-03-23 01:06:45.860539 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-23 01:06:45.860544 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-23 01:06:45.860549 | orchestrator | 2026-03-23 01:06:45.860560 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-23 01:06:45.860566 | orchestrator | Monday 23 March 2026 01:03:04 +0000 (0:00:07.068) 0:00:32.071 ********** 2026-03-23 01:06:45.860571 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.860576 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.860581 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.860587 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.860592 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.860597 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.860602 | orchestrator | 2026-03-23 01:06:45.860607 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-23 01:06:45.860612 | orchestrator | Monday 23 March 2026 01:03:04 +0000 (0:00:00.485) 0:00:32.556 ********** 2026-03-23 01:06:45.860617 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.860623 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.860628 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.860633 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.860638 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.860643 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.860648 | orchestrator | 2026-03-23 01:06:45.860653 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-23 01:06:45.860658 | 
orchestrator | Monday 23 March 2026 01:03:06 +0000 (0:00:01.927) 0:00:34.483 ********** 2026-03-23 01:06:45.860664 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:06:45.860669 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:06:45.860674 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:06:45.860679 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:06:45.860684 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:06:45.860689 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:06:45.860695 | orchestrator | 2026-03-23 01:06:45.860700 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-23 01:06:45.860706 | orchestrator | Monday 23 March 2026 01:03:07 +0000 (0:00:01.038) 0:00:35.522 ********** 2026-03-23 01:06:45.860715 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.860720 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.860725 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.860730 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.860735 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.860740 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.860746 | orchestrator | 2026-03-23 01:06:45.860751 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-23 01:06:45.860755 | orchestrator | Monday 23 March 2026 01:03:09 +0000 (0:00:01.827) 0:00:37.350 ********** 2026-03-23 01:06:45.860764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.860772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.860783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.860789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.860799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.860804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.860810 | orchestrator | 2026-03-23 01:06:45.860815 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-23 01:06:45.860820 | orchestrator | Monday 23 March 2026 01:03:11 +0000 (0:00:02.141) 0:00:39.492 ********** 2026-03-23 01:06:45.860826 | orchestrator | [WARNING]: Skipped 2026-03-23 01:06:45.860831 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-23 01:06:45.860837 | orchestrator | due to this access issue: 2026-03-23 01:06:45.860842 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-23 01:06:45.860847 | orchestrator | a directory 2026-03-23 01:06:45.860852 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-23 01:06:45.860857 | orchestrator | 2026-03-23 01:06:45.860863 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-23 01:06:45.860868 | orchestrator | Monday 23 March 2026 01:03:12 +0000 (0:00:00.799) 0:00:40.291 ********** 2026-03-23 01:06:45.860873 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 01:06:45.860880 | orchestrator | 2026-03-23 01:06:45.860885 | orchestrator | 
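The container definitions in the items above each carry a `healthcheck` block (`healthcheck_curl http://<ip>:9696` for the API servers, `healthcheck_port ... 6640` for the metadata agents, with `interval`, `retries`, and `timeout` settings). A rough Python analogue of the port-based variant, hedged as a sketch rather than Kolla's actual `healthcheck_port` script:

```python
import socket


def healthcheck_port(host, port, timeout=30):
    """Return True if a TCP connection to host:port succeeds --
    a rough analogue of a port-reachability healthcheck."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The container runtime runs the configured test command every `interval` seconds and only marks the container unhealthy after `retries` consecutive failures, so a single slow check does not flap the service state.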
TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-23 01:06:45.860890 | orchestrator | Monday 23 March 2026 01:03:13 +0000 (0:00:01.011) 0:00:41.303 ********** 2026-03-23 01:06:45.860898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.860904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.860913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.860919 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.860924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.860933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.860938 | orchestrator | 2026-03-23 01:06:45.860948 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-23 01:06:45.860953 | orchestrator | Monday 23 March 2026 01:03:16 +0000 (0:00:03.215) 0:00:44.519 ********** 2026-03-23 01:06:45.860958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.860964 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.860969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.860975 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.860980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.860986 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.860992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.860997 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861015 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.861021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861026 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.861031 | orchestrator | 2026-03-23 01:06:45.861037 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-23 01:06:45.861042 | orchestrator | Monday 23 March 2026 01:03:19 +0000 (0:00:02.684) 0:00:47.204 ********** 2026-03-23 01:06:45.861047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861052 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.861058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861064 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.861072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861081 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.861086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861091 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.861097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861102 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.861107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861127 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861133 | orchestrator | 2026-03-23 01:06:45.861138 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-23 01:06:45.861144 | orchestrator | Monday 23 March 2026 01:03:22 +0000 (0:00:03.168) 0:00:50.372 ********** 2026-03-23 01:06:45.861149 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.861154 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.861159 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.861164 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.861170 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861175 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.861180 | orchestrator | 2026-03-23 01:06:45.861185 | orchestrator | TASK [neutron : Check if policies shall be overwritten] 
************************ 2026-03-23 01:06:45.861191 | orchestrator | Monday 23 March 2026 01:03:24 +0000 (0:00:02.244) 0:00:52.617 ********** 2026-03-23 01:06:45.861199 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.861204 | orchestrator | 2026-03-23 01:06:45.861209 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-23 01:06:45.861214 | orchestrator | Monday 23 March 2026 01:03:25 +0000 (0:00:00.490) 0:00:53.108 ********** 2026-03-23 01:06:45.861219 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.861225 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.861230 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.861235 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.861240 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861245 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.861250 | orchestrator | 2026-03-23 01:06:45.861315 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-23 01:06:45.861324 | orchestrator | Monday 23 March 2026 01:03:26 +0000 (0:00:00.725) 0:00:53.834 ********** 2026-03-23 01:06:45.861660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861679 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.861684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861687 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.861691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 
01:06:45.861695 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.861698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861707 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.861711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861714 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861725 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.861728 | orchestrator | 2026-03-23 01:06:45.861731 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-23 01:06:45.861734 | orchestrator | Monday 23 March 2026 01:03:29 +0000 (0:00:02.960) 0:00:56.794 ********** 2026-03-23 01:06:45.861738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.861742 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.861749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.861754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.861758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.861761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.861764 | orchestrator | 2026-03-23 01:06:45.861768 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-23 01:06:45.861771 | orchestrator | Monday 23 March 2026 01:03:32 +0000 (0:00:03.079) 0:00:59.874 ********** 2026-03-23 01:06:45.861774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.861780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.861787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.861790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.861793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.861799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-23 01:06:45.861802 | orchestrator | 2026-03-23 01:06:45.861805 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-23 01:06:45.861808 | orchestrator | Monday 23 March 2026 01:03:38 +0000 (0:00:06.611) 0:01:06.485 ********** 2026-03-23 01:06:45.861811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861815 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.861821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861824 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.861828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861831 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.861834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.861842 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.861846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861849 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.861852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861855 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861858 | orchestrator | 2026-03-23 01:06:45.861861 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-23 01:06:45.861865 | orchestrator | Monday 23 March 2026 01:03:41 +0000 (0:00:02.349) 0:01:08.835 ********** 2026-03-23 01:06:45.861868 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:06:45.861872 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.861875 | orchestrator | skipping: 
[testbed-node-5] 2026-03-23 01:06:45.861878 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:06:45.861881 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861884 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:06:45.861887 | orchestrator | 2026-03-23 01:06:45.861890 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-23 01:06:45.861894 | orchestrator | Monday 23 March 2026 01:03:43 +0000 (0:00:02.804) 0:01:11.639 ********** 2026-03-23 01:06:45.861897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861902 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.861906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861909 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.861915 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.861918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-03-23 01:06:45.861924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.861928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-23 01:06:45.861934 | orchestrator | 2026-03-23 01:06:45.861937 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] 
**************************** 2026-03-23 01:06:45.861940 | orchestrator | Monday 23 March 2026 01:03:48 +0000 (0:00:04.318) 0:01:15.957 ********** 2026-03-23 01:06:45.861943 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.861946 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.861949 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.861952 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.861955 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861958 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.861961 | orchestrator | 2026-03-23 01:06:45.861964 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-23 01:06:45.861968 | orchestrator | Monday 23 March 2026 01:03:51 +0000 (0:00:03.113) 0:01:19.071 ********** 2026-03-23 01:06:45.861971 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.861974 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.861977 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.861980 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.861983 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.861986 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.861989 | orchestrator | 2026-03-23 01:06:45.861992 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-23 01:06:45.861995 | orchestrator | Monday 23 March 2026 01:03:53 +0000 (0:00:02.333) 0:01:21.404 ********** 2026-03-23 01:06:45.861998 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862001 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862004 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862007 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862010 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862041 | orchestrator | skipping: [testbed-node-4] 
2026-03-23 01:06:45.862044 | orchestrator | 2026-03-23 01:06:45.862048 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-23 01:06:45.862051 | orchestrator | Monday 23 March 2026 01:03:56 +0000 (0:00:02.774) 0:01:24.179 ********** 2026-03-23 01:06:45.862054 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862057 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862060 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862063 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862066 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862069 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862072 | orchestrator | 2026-03-23 01:06:45.862075 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-23 01:06:45.862078 | orchestrator | Monday 23 March 2026 01:03:59 +0000 (0:00:02.466) 0:01:26.645 ********** 2026-03-23 01:06:45.862081 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862084 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862087 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862090 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862093 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862096 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862099 | orchestrator | 2026-03-23 01:06:45.862103 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-23 01:06:45.862106 | orchestrator | Monday 23 March 2026 01:04:01 +0000 (0:00:02.381) 0:01:29.027 ********** 2026-03-23 01:06:45.862109 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862143 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862152 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862156 | orchestrator | skipping: [testbed-node-4] 
2026-03-23 01:06:45.862161 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862165 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862171 | orchestrator | 2026-03-23 01:06:45.862174 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-23 01:06:45.862177 | orchestrator | Monday 23 March 2026 01:04:04 +0000 (0:00:02.693) 0:01:31.720 ********** 2026-03-23 01:06:45.862181 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-23 01:06:45.862184 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862187 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-23 01:06:45.862190 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862196 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-23 01:06:45.862199 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862202 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-23 01:06:45.862205 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862209 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-23 01:06:45.862214 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862219 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-23 01:06:45.862224 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862231 | orchestrator | 2026-03-23 01:06:45.862237 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-23 01:06:45.862241 | orchestrator | Monday 23 March 2026 01:04:07 +0000 (0:00:03.166) 0:01:34.887 ********** 2026-03-23 01:06:45.862246 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.862251 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.862262 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862267 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.862276 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.862292 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.862300 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.862308 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862312 | orchestrator | 2026-03-23 01:06:45.862316 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-23 01:06:45.862319 | orchestrator | Monday 23 March 2026 01:04:10 +0000 (0:00:03.548) 0:01:38.435 ********** 2026-03-23 01:06:45.862324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.862330 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.862338 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-23 01:06:45.862349 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.862357 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.862365 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-23 01:06:45.862375 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862378 | orchestrator | 2026-03-23 01:06:45.862382 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-23 01:06:45.862386 | orchestrator | Monday 23 March 2026 01:04:12 +0000 (0:00:02.135) 0:01:40.571 ********** 2026-03-23 01:06:45.862389 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862393 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862397 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862400 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862404 | 
orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862407 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862411 | orchestrator | 2026-03-23 01:06:45.862414 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-23 01:06:45.862418 | orchestrator | Monday 23 March 2026 01:04:15 +0000 (0:00:02.479) 0:01:43.051 ********** 2026-03-23 01:06:45.862422 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862425 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862429 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862433 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:06:45.862437 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:06:45.862440 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:06:45.862444 | orchestrator | 2026-03-23 01:06:45.862448 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-23 01:06:45.862452 | orchestrator | Monday 23 March 2026 01:04:20 +0000 (0:00:04.636) 0:01:47.687 ********** 2026-03-23 01:06:45.862455 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862459 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862463 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862466 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862470 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862474 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862477 | orchestrator | 2026-03-23 01:06:45.862483 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-23 01:06:45.862487 | orchestrator | Monday 23 March 2026 01:04:24 +0000 (0:00:04.195) 0:01:51.883 ********** 2026-03-23 01:06:45.862490 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862494 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862497 | 
orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862501 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862505 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862509 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862512 | orchestrator | 2026-03-23 01:06:45.862516 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-23 01:06:45.862520 | orchestrator | Monday 23 March 2026 01:04:26 +0000 (0:00:02.419) 0:01:54.303 ********** 2026-03-23 01:06:45.862523 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862527 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862531 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862534 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862538 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862542 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862548 | orchestrator | 2026-03-23 01:06:45.862552 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-23 01:06:45.862556 | orchestrator | Monday 23 March 2026 01:04:28 +0000 (0:00:01.822) 0:01:56.126 ********** 2026-03-23 01:06:45.862559 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862563 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862566 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862570 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862574 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862577 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862581 | orchestrator | 2026-03-23 01:06:45.862584 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-23 01:06:45.862588 | orchestrator | Monday 23 March 2026 01:04:30 +0000 (0:00:01.940) 0:01:58.066 ********** 2026-03-23 01:06:45.862592 | 
orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862595 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862599 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862603 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862606 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862610 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862614 | orchestrator | 2026-03-23 01:06:45.862617 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-23 01:06:45.862621 | orchestrator | Monday 23 March 2026 01:04:32 +0000 (0:00:01.892) 0:01:59.959 ********** 2026-03-23 01:06:45.862624 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862628 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862632 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862635 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862639 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862643 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862646 | orchestrator | 2026-03-23 01:06:45.862650 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-23 01:06:45.862654 | orchestrator | Monday 23 March 2026 01:04:33 +0000 (0:00:01.663) 0:02:01.623 ********** 2026-03-23 01:06:45.862657 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862661 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862665 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862669 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862672 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862676 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862680 | orchestrator | 2026-03-23 01:06:45.862684 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 
2026-03-23 01:06:45.862687 | orchestrator | Monday 23 March 2026 01:04:36 +0000 (0:00:02.015) 0:02:03.638 ********** 2026-03-23 01:06:45.862691 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-23 01:06:45.862695 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:45.862699 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-23 01:06:45.862703 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:45.862707 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-23 01:06:45.862710 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:45.862714 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-23 01:06:45.862717 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:06:45.862720 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-23 01:06:45.862731 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:06:45.862736 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-23 01:06:45.862741 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:06:45.862750 | orchestrator | 2026-03-23 01:06:45.862754 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-23 01:06:45.862759 | orchestrator | Monday 23 March 2026 01:04:38 +0000 (0:00:02.141) 0:02:05.780 ********** 2026-03-23 01:06:45.862769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-23 01:06:45.862775 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:06:45.862780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-23 01:06:45.862786 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:06:45.862792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-23 01:06:45.862797 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:06:45.862803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-23 01:06:45.862808 | orchestrator | skipping: [testbed-node-5]
2026-03-23 01:06:45.862814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-23 01:06:45.862817 | orchestrator | skipping: [testbed-node-4]
2026-03-23 01:06:45.862824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-23 01:06:45.862830 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:06:45.862835 | orchestrator |
2026-03-23 01:06:45.862840 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-03-23 01:06:45.862846 | orchestrator | Monday 23 March 2026 01:04:39 +0000 (0:00:01.729) 0:02:07.509 **********
2026-03-23 01:06:45.862851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-23 01:06:45.862857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-23 01:06:45.862863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False,
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-23 01:06:45.862872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-23 01:06:45.862881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-23 01:06:45.862886 | orchestrator | changed: [testbed-node-4] => (item={'key':
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-23 01:06:45.862891 | orchestrator |
2026-03-23 01:06:45.862895 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-23 01:06:45.862899 | orchestrator | Monday 23 March 2026 01:04:42 +0000 (0:00:02.364) 0:02:09.874 **********
2026-03-23 01:06:45.862905 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:06:45.862909 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:06:45.862914 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:06:45.862919 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:06:45.862923 | orchestrator | skipping: [testbed-node-4]
2026-03-23 01:06:45.862928 | orchestrator | skipping: [testbed-node-5]
2026-03-23 01:06:45.862932 | orchestrator |
2026-03-23 01:06:45.862938 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-03-23 01:06:45.862943 | orchestrator | Monday 23 March 2026 01:04:42 +0000 (0:00:00.580) 0:02:10.454 **********
2026-03-23 01:06:45.862948 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:06:45.862953 | orchestrator |
2026-03-23 01:06:45.862957 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-23 01:06:45.862962 | orchestrator | Monday 23 March 2026 01:04:44
+0000 (0:00:01.905) 0:02:12.360 **********
2026-03-23 01:06:45.862971 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:06:45.862977 | orchestrator |
2026-03-23 01:06:45.862982 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-23 01:06:45.862987 | orchestrator | Monday 23 March 2026 01:04:46 +0000 (0:00:01.864) 0:02:14.225 **********
2026-03-23 01:06:45.862992 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:06:45.862997 | orchestrator |
2026-03-23 01:06:45.863000 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-23 01:06:45.863003 | orchestrator | Monday 23 March 2026 01:05:26 +0000 (0:00:39.764) 0:02:53.990 **********
2026-03-23 01:06:45.863006 | orchestrator |
2026-03-23 01:06:45.863010 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-23 01:06:45.863013 | orchestrator | Monday 23 March 2026 01:05:26 +0000 (0:00:00.066) 0:02:54.056 **********
2026-03-23 01:06:45.863016 | orchestrator |
2026-03-23 01:06:45.863019 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-23 01:06:45.863022 | orchestrator | Monday 23 March 2026 01:05:26 +0000 (0:00:00.066) 0:02:54.122 **********
2026-03-23 01:06:45.863025 | orchestrator |
2026-03-23 01:06:45.863029 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-23 01:06:45.863032 | orchestrator | Monday 23 March 2026 01:05:26 +0000 (0:00:00.066) 0:02:54.189 **********
2026-03-23 01:06:45.863035 | orchestrator |
2026-03-23 01:06:45.863038 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-23 01:06:45.863041 | orchestrator | Monday 23 March 2026 01:05:26 +0000 (0:00:00.071) 0:02:54.261 **********
2026-03-23 01:06:45.863044 | orchestrator |
2026-03-23 01:06:45.863047 | orchestrator | TASK
[neutron : Flush Handlers] ************************************************
2026-03-23 01:06:45.863050 | orchestrator | Monday 23 March 2026 01:05:26 +0000 (0:00:00.087) 0:02:54.348 **********
2026-03-23 01:06:45.863054 | orchestrator |
2026-03-23 01:06:45.863057 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-23 01:06:45.863060 | orchestrator | Monday 23 March 2026 01:05:26 +0000 (0:00:00.068) 0:02:54.416 **********
2026-03-23 01:06:45.863063 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:06:45.863066 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:06:45.863069 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:06:45.863072 | orchestrator |
2026-03-23 01:06:45.863075 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-23 01:06:45.863079 | orchestrator | Monday 23 March 2026 01:05:52 +0000 (0:00:26.092) 0:03:20.508 **********
2026-03-23 01:06:45.863082 | orchestrator | changed: [testbed-node-4]
2026-03-23 01:06:45.863085 | orchestrator | changed: [testbed-node-3]
2026-03-23 01:06:45.863088 | orchestrator | changed: [testbed-node-5]
2026-03-23 01:06:45.863091 | orchestrator |
2026-03-23 01:06:45.863094 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 01:06:45.863101 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-23 01:06:45.863106 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-23 01:06:45.863109 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-23 01:06:45.863127 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-23 01:06:45.863130 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0
skipped=32  rescued=0 ignored=0
2026-03-23 01:06:45.863134 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-23 01:06:45.863140 | orchestrator |
2026-03-23 01:06:45.863214 | orchestrator |
2026-03-23 01:06:45.863223 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 01:06:45.863230 | orchestrator | Monday 23 March 2026 01:06:45 +0000 (0:00:52.332) 0:04:12.840 **********
2026-03-23 01:06:45.863236 | orchestrator | ===============================================================================
2026-03-23 01:06:45.863241 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 52.33s
2026-03-23 01:06:45.863246 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.76s
2026-03-23 01:06:45.863251 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.09s
2026-03-23 01:06:45.863299 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.07s
2026-03-23 01:06:45.863307 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.61s
2026-03-23 01:06:45.863312 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.70s
2026-03-23 01:06:45.863318 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.64s
2026-03-23 01:06:45.863323 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.32s
2026-03-23 01:06:45.863329 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.20s
2026-03-23 01:06:45.863334 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.91s
2026-03-23 01:06:45.863340 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.55s
2026-03-23
01:06:45.863345 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.29s
2026-03-23 01:06:45.863350 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.22s
2026-03-23 01:06:45.863356 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.20s
2026-03-23 01:06:45.863361 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.17s
2026-03-23 01:06:45.863365 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.17s
2026-03-23 01:06:45.863371 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.11s
2026-03-23 01:06:45.863376 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.08s
2026-03-23 01:06:45.863382 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 2.98s
2026-03-23 01:06:45.863387 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.96s
2026-03-23 01:06:45.863393 | orchestrator | 2026-03-23 01:06:45 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED
2026-03-23 01:06:45.863402 | orchestrator | 2026-03-23 01:06:45 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED
2026-03-23 01:06:45.863408 | orchestrator | 2026-03-23 01:06:45 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:06:48.902341 | orchestrator | 2026-03-23 01:06:48 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:06:48.903642 | orchestrator | 2026-03-23 01:06:48 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED
2026-03-23 01:06:48.904506 | orchestrator | 2026-03-23 01:06:48 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED
2026-03-23 01:06:48.907206 | orchestrator | 2026-03-23 01:06:48 | INFO  | Task
1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED
2026-03-23 01:06:48.907802 | orchestrator | 2026-03-23 01:06:48 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:06:51.951364 | orchestrator | 2026-03-23 01:06:51 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:06:51.953434 | orchestrator | 2026-03-23 01:06:51 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED
2026-03-23 01:06:51.954170 | orchestrator | 2026-03-23 01:06:51 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED
2026-03-23 01:06:51.955571 | orchestrator | 2026-03-23 01:06:51 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED
2026-03-23 01:06:51.955734 | orchestrator | 2026-03-23 01:06:51 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:06:55.005291 | orchestrator | 2026-03-23 01:06:55 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:06:55.005390 | orchestrator | 2026-03-23 01:06:55 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED
2026-03-23 01:06:55.005401 | orchestrator | 2026-03-23 01:06:55 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED
2026-03-23 01:06:55.005765 | orchestrator | 2026-03-23 01:06:55 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state STARTED
2026-03-23 01:06:55.005805 | orchestrator | 2026-03-23 01:06:55 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:06:58.057189 | orchestrator | 2026-03-23 01:06:58 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:06:58.060263 | orchestrator | 2026-03-23 01:06:58 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED
2026-03-23 01:06:58.062472 | orchestrator | 2026-03-23 01:06:58 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED
2026-03-23 01:06:58.064395 | orchestrator | 2026-03-23 01:06:58 | INFO  | Task
827151f2-b94f-459e-8c13-0ce449251a0c is in state STARTED
2026-03-23 01:06:58.066674 | orchestrator | 2026-03-23 01:06:58 | INFO  | Task 1b93e845-dd67-4def-a3fa-c92890b26360 is in state SUCCESS
2026-03-23 01:06:58.066730 | orchestrator | 2026-03-23 01:06:58 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:06:58.068367 | orchestrator |
2026-03-23 01:06:58.068432 | orchestrator |
2026-03-23 01:06:58.068447 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 01:06:58.068454 | orchestrator |
2026-03-23 01:06:58.068459 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-23 01:06:58.068466 | orchestrator | Monday 23 March 2026 01:05:48 +0000 (0:00:00.272) 0:00:00.272 **********
2026-03-23 01:06:58.068472 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:06:58.068478 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:06:58.068484 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:06:58.068489 | orchestrator |
2026-03-23 01:06:58.068507 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 01:06:58.068513 | orchestrator | Monday 23 March 2026 01:05:49 +0000 (0:00:00.342) 0:00:00.614 **********
2026-03-23 01:06:58.068518 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-23 01:06:58.068524 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-23 01:06:58.068529 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-23 01:06:58.068534 | orchestrator |
2026-03-23 01:06:58.068539 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-23 01:06:58.068545 | orchestrator |
2026-03-23 01:06:58.068550 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-23 01:06:58.068556 | orchestrator | Monday 23 March 2026 01:05:49 +0000
(0:00:00.605) 0:00:01.219 **********
2026-03-23 01:06:58.068561 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:06:58.068567 | orchestrator |
2026-03-23 01:06:58.068573 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-23 01:06:58.068578 | orchestrator | Monday 23 March 2026 01:05:50 +0000 (0:00:00.891) 0:00:02.111 **********
2026-03-23 01:06:58.068583 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-23 01:06:58.068588 | orchestrator |
2026-03-23 01:06:58.068594 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-23 01:06:58.068617 | orchestrator | Monday 23 March 2026 01:05:54 +0000 (0:00:04.085) 0:00:06.196 **********
2026-03-23 01:06:58.068624 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-23 01:06:58.068630 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-23 01:06:58.068635 | orchestrator |
2026-03-23 01:06:58.068640 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-23 01:06:58.068645 | orchestrator | Monday 23 March 2026 01:06:01 +0000 (0:00:06.494) 0:00:12.690 **********
2026-03-23 01:06:58.068650 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-23 01:06:58.068655 | orchestrator |
2026-03-23 01:06:58.068661 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-23 01:06:58.068666 | orchestrator | Monday 23 March 2026 01:06:04 +0000 (0:00:03.298) 0:00:15.989 **********
2026-03-23 01:06:58.068672 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-23 01:06:58.068678 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-23 01:06:58.068684 | orchestrator |
2026-03-23 01:06:58.068689 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-23 01:06:58.068695 | orchestrator | Monday 23 March 2026 01:06:08 +0000 (0:00:04.054) 0:00:20.044 **********
2026-03-23 01:06:58.068701 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-23 01:06:58.068707 | orchestrator |
2026-03-23 01:06:58.068713 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-23 01:06:58.068719 | orchestrator | Monday 23 March 2026 01:06:12 +0000 (0:00:03.631) 0:00:23.675 **********
2026-03-23 01:06:58.068724 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-23 01:06:58.068730 | orchestrator |
2026-03-23 01:06:58.068735 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-23 01:06:58.068740 | orchestrator | Monday 23 March 2026 01:06:15 +0000 (0:00:03.536) 0:00:27.211 **********
2026-03-23 01:06:58.068746 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:06:58.068751 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:06:58.068757 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:06:58.068763 | orchestrator |
2026-03-23 01:06:58.068769 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-23 01:06:58.068775 | orchestrator | Monday 23 March 2026 01:06:15 +0000 (0:00:00.246) 0:00:27.458 **********
2026-03-23 01:06:58.068783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-23 01:06:58.068805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-23 01:06:58.068818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-23 01:06:58.068825 | orchestrator |
2026-03-23 01:06:58.068831 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-03-23 01:06:58.068836 | orchestrator | Monday 23 March 2026 01:06:17 +0000 (0:00:01.462) 0:00:28.920 **********
2026-03-23 01:06:58.068841 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:06:58.068847 | orchestrator |
2026-03-23 01:06:58.068853 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-03-23 01:06:58.068859 | orchestrator | Monday 23 March 2026 01:06:17 +0000 (0:00:00.110) 0:00:29.030 **********
2026-03-23 01:06:58.068865 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:06:58.068871 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:06:58.068878 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:06:58.068884 | orchestrator |
2026-03-23 01:06:58.068890 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-23 01:06:58.068896 | orchestrator | Monday 23 March 2026 01:06:17 +0000 (0:00:00.266) 0:00:29.297 **********
2026-03-23 01:06:58.068902 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:06:58.068908 | orchestrator |
2026-03-23 01:06:58.068915 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-03-23 01:06:58.068921 | orchestrator | Monday 23 March 2026 01:06:18 +0000 (0:00:00.552) 0:00:29.850 **********
2026-03-23 01:06:58.068927 | orchestrator |
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-23 01:06:58.068939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-23 01:06:58.068950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api',
'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.068957 | orchestrator | 2026-03-23 01:06:58.068963 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-23 01:06:58.068969 | orchestrator | Monday 23 March 2026 01:06:19 +0000 (0:00:01.376) 0:00:31.227 ********** 2026-03-23 01:06:58.068975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 
01:06:58.068982 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:58.068987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 01:06:58.068992 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:58.069002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2026-03-23 01:06:58.069011 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:58.069018 | orchestrator | 2026-03-23 01:06:58.069024 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-23 01:06:58.069030 | orchestrator | Monday 23 March 2026 01:06:20 +0000 (0:00:00.439) 0:00:31.666 ********** 2026-03-23 01:06:58.069037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 01:06:58.069043 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:58.069049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 01:06:58.069055 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:58.069061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 01:06:58.069067 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:58.069073 | orchestrator | 2026-03-23 01:06:58.069079 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-23 01:06:58.069085 | orchestrator | Monday 23 March 2026 01:06:20 +0000 (0:00:00.677) 0:00:32.343 ********** 2026-03-23 01:06:58.069120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.069132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.069138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.069144 | orchestrator | 2026-03-23 01:06:58.069150 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-23 01:06:58.069156 | orchestrator | Monday 23 March 2026 01:06:22 +0000 (0:00:01.461) 0:00:33.805 ********** 2026-03-23 01:06:58.069162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.069167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.069184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.069191 | orchestrator | 2026-03-23 01:06:58.069197 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-23 01:06:58.069203 | orchestrator | Monday 23 March 2026 01:06:24 +0000 (0:00:02.098) 0:00:35.903 ********** 2026-03-23 01:06:58.069210 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-23 01:06:58.069216 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-23 01:06:58.069222 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-23 01:06:58.069228 | orchestrator | 2026-03-23 01:06:58.069235 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-23 01:06:58.069241 | orchestrator | Monday 23 March 2026 01:06:25 +0000 (0:00:01.394) 0:00:37.299 ********** 2026-03-23 01:06:58.069247 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:06:58.069253 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:06:58.069259 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:06:58.069265 | orchestrator | 2026-03-23 01:06:58.069271 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-23 01:06:58.069277 | orchestrator | Monday 23 March 2026 01:06:27 +0000 (0:00:01.443) 0:00:38.743 ********** 2026-03-23 01:06:58.069283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 01:06:58.069290 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:06:58.069296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 01:06:58.069307 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:06:58.069339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-23 01:06:58.069347 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:06:58.069353 | orchestrator | 2026-03-23 01:06:58.069359 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-23 01:06:58.069364 | orchestrator | Monday 23 March 2026 01:06:28 +0000 (0:00:00.993) 0:00:39.736 ********** 2026-03-23 01:06:58.069370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.069376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.069382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-23 01:06:58.069392 | orchestrator | 2026-03-23 01:06:58.069398 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-23 01:06:58.069403 | orchestrator | Monday 23 March 2026 01:06:29 +0000 (0:00:01.149) 0:00:40.886 ********** 2026-03-23 01:06:58.069409 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:06:58.069415 | orchestrator | 2026-03-23 01:06:58.069421 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-23 01:06:58.069427 | orchestrator | Monday 23 March 2026 01:06:31 +0000 (0:00:02.302) 0:00:43.188 ********** 2026-03-23 01:06:58.069433 | orchestrator | changed: [testbed-node-0] 2026-03-23 
01:06:58.069439 | orchestrator | 2026-03-23 01:06:58.069445 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-23 01:06:58.069451 | orchestrator | Monday 23 March 2026 01:06:33 +0000 (0:00:02.223) 0:00:45.411 ********** 2026-03-23 01:06:58.069457 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:06:58.069463 | orchestrator | 2026-03-23 01:06:58.069469 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-23 01:06:58.069475 | orchestrator | Monday 23 March 2026 01:06:46 +0000 (0:00:13.188) 0:00:58.600 ********** 2026-03-23 01:06:58.069481 | orchestrator | 2026-03-23 01:06:58.069489 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-23 01:06:58.069495 | orchestrator | Monday 23 March 2026 01:06:47 +0000 (0:00:00.120) 0:00:58.720 ********** 2026-03-23 01:06:58.069500 | orchestrator | 2026-03-23 01:06:58.069510 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-23 01:06:58.069517 | orchestrator | Monday 23 March 2026 01:06:47 +0000 (0:00:00.114) 0:00:58.835 ********** 2026-03-23 01:06:58.069523 | orchestrator | 2026-03-23 01:06:58.069528 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-23 01:06:58.069534 | orchestrator | Monday 23 March 2026 01:06:47 +0000 (0:00:00.089) 0:00:58.924 ********** 2026-03-23 01:06:58.069540 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:06:58.069546 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:06:58.069551 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:06:58.069557 | orchestrator | 2026-03-23 01:06:58.069563 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:06:58.069570 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 
ignored=0 2026-03-23 01:06:58.069576 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-23 01:06:58.069583 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-23 01:06:58.069588 | orchestrator | 2026-03-23 01:06:58.069594 | orchestrator | 2026-03-23 01:06:58.069598 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:06:58.069604 | orchestrator | Monday 23 March 2026 01:06:55 +0000 (0:00:07.969) 0:01:06.894 ********** 2026-03-23 01:06:58.069609 | orchestrator | =============================================================================== 2026-03-23 01:06:58.069615 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.19s 2026-03-23 01:06:58.069625 | orchestrator | placement : Restart placement-api container ----------------------------- 7.97s 2026-03-23 01:06:58.069631 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.49s 2026-03-23 01:06:58.069636 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.09s 2026-03-23 01:06:58.069642 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.05s 2026-03-23 01:06:58.069648 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.63s 2026-03-23 01:06:58.069654 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.54s 2026-03-23 01:06:58.069660 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.30s 2026-03-23 01:06:58.069666 | orchestrator | placement : Creating placement databases -------------------------------- 2.30s 2026-03-23 01:06:58.069672 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.22s 2026-03-23 01:06:58.069677 | 
orchestrator | placement : Copying over placement.conf --------------------------------- 2.10s 2026-03-23 01:06:58.069683 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.46s 2026-03-23 01:06:58.069688 | orchestrator | placement : Copying over config.json files for services ----------------- 1.46s 2026-03-23 01:06:58.069694 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.44s 2026-03-23 01:06:58.069700 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.40s 2026-03-23 01:06:58.069706 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.38s 2026-03-23 01:06:58.069712 | orchestrator | placement : Check placement containers ---------------------------------- 1.15s 2026-03-23 01:06:58.069717 | orchestrator | placement : Copying over existing policy file --------------------------- 0.99s 2026-03-23 01:06:58.069723 | orchestrator | placement : include_tasks ----------------------------------------------- 0.89s 2026-03-23 01:06:58.069729 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.68s 2026-03-23 01:07:01.107560 | orchestrator | 2026-03-23 01:07:01 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:07:01.108287 | orchestrator | 2026-03-23 01:07:01 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:01.109259 | orchestrator | 2026-03-23 01:07:01 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:01.110976 | orchestrator | 2026-03-23 01:07:01 | INFO  | Task 827151f2-b94f-459e-8c13-0ce449251a0c is in state SUCCESS 2026-03-23 01:07:01.111012 | orchestrator | 2026-03-23 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:04.140582 | orchestrator | 2026-03-23 01:07:04 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state 
STARTED 2026-03-23 01:07:04.142415 | orchestrator | 2026-03-23 01:07:04 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:04.143003 | orchestrator | 2026-03-23 01:07:04 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:04.143749 | orchestrator | 2026-03-23 01:07:04 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:07:04.143780 | orchestrator | 2026-03-23 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:07.181873 | orchestrator | 2026-03-23 01:07:07 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:07:07.184270 | orchestrator | 2026-03-23 01:07:07 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:07.186531 | orchestrator | 2026-03-23 01:07:07 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:07.188849 | orchestrator | 2026-03-23 01:07:07 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:07:07.188923 | orchestrator | 2026-03-23 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:10.243462 | orchestrator | 2026-03-23 01:07:10 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:07:10.243757 | orchestrator | 2026-03-23 01:07:10 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:10.246393 | orchestrator | 2026-03-23 01:07:10 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:10.247641 | orchestrator | 2026-03-23 01:07:10 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:07:10.248475 | orchestrator | 2026-03-23 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:13.296330 | orchestrator | 2026-03-23 01:07:13 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 
01:07:13.298076 | orchestrator | 2026-03-23 01:07:13 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:13.301041 | orchestrator | 2026-03-23 01:07:13 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:13.304119 | orchestrator | 2026-03-23 01:07:13 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:07:13.304175 | orchestrator | 2026-03-23 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:16.350823 | orchestrator | 2026-03-23 01:07:16 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:07:16.350888 | orchestrator | 2026-03-23 01:07:16 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:16.350896 | orchestrator | 2026-03-23 01:07:16 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:16.352562 | orchestrator | 2026-03-23 01:07:16 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:07:16.352622 | orchestrator | 2026-03-23 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:19.393854 | orchestrator | 2026-03-23 01:07:19 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:07:19.394973 | orchestrator | 2026-03-23 01:07:19 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:19.396605 | orchestrator | 2026-03-23 01:07:19 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:19.398262 | orchestrator | 2026-03-23 01:07:19 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:07:19.398300 | orchestrator | 2026-03-23 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:22.435630 | orchestrator | 2026-03-23 01:07:22 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:07:22.439187 | orchestrator 
| 2026-03-23 01:07:22 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:22.439955 | orchestrator | 2026-03-23 01:07:22 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:22.441765 | orchestrator | 2026-03-23 01:07:22 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:07:22.441795 | orchestrator | 2026-03-23 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:25.478228 | orchestrator | 2026-03-23 01:07:25 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:07:25.481106 | orchestrator | 2026-03-23 01:07:25 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:25.483311 | orchestrator | 2026-03-23 01:07:25 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:25.485343 | orchestrator | 2026-03-23 01:07:25 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:07:25.485402 | orchestrator | 2026-03-23 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:28.528000 | orchestrator | 2026-03-23 01:07:28 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:07:28.530410 | orchestrator | 2026-03-23 01:07:28 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:07:28.532602 | orchestrator | 2026-03-23 01:07:28 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:07:28.534615 | orchestrator | 2026-03-23 01:07:28 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:07:28.534656 | orchestrator | 2026-03-23 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:07:31.589382 | orchestrator | 2026-03-23 01:07:31 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:07:31.591291 | orchestrator | 2026-03-23 01:07:31 | INFO  | 
bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:08:17.285068 | orchestrator | 2026-03-23 01:08:17 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:08:17.287509 | orchestrator | 2026-03-23 01:08:17 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:08:17.287593 | orchestrator | 2026-03-23 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:08:20.334120 | orchestrator | 2026-03-23 01:08:20 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:08:20.335210 | orchestrator | 2026-03-23 01:08:20 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state STARTED 2026-03-23 01:08:20.336618 | orchestrator | 2026-03-23 01:08:20 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:08:20.338874 | orchestrator | 2026-03-23 01:08:20 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:08:20.339019 | orchestrator | 2026-03-23 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:08:23.382942 | orchestrator | 2026-03-23 01:08:23 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:08:23.383039 | orchestrator | 2026-03-23 01:08:23 | INFO  | Task bd1bc78b-95b8-4e3b-9fc3-c04f233b5710 is in state SUCCESS 2026-03-23 01:08:23.383996 | orchestrator | 2026-03-23 01:08:23.384205 | orchestrator | 2026-03-23 01:08:23.384217 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:08:23.384223 | orchestrator | 2026-03-23 01:08:23.384229 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 01:08:23.384234 | orchestrator | Monday 23 March 2026 01:06:58 +0000 (0:00:00.183) 0:00:00.183 ********** 2026-03-23 01:08:23.384239 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:08:23.384245 | orchestrator | ok: [testbed-node-1] 
2026-03-23 01:08:23.384250 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:08:23.384255 | orchestrator | 2026-03-23 01:08:23.384261 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 01:08:23.384267 | orchestrator | Monday 23 March 2026 01:06:58 +0000 (0:00:00.335) 0:00:00.519 ********** 2026-03-23 01:08:23.384272 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-03-23 01:08:23.384278 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-03-23 01:08:23.384283 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-03-23 01:08:23.384288 | orchestrator | 2026-03-23 01:08:23.384294 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-03-23 01:08:23.384299 | orchestrator | 2026-03-23 01:08:23.384304 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-03-23 01:08:23.384310 | orchestrator | Monday 23 March 2026 01:06:59 +0000 (0:00:00.509) 0:00:01.028 ********** 2026-03-23 01:08:23.384315 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:08:23.384320 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:08:23.384324 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:08:23.384329 | orchestrator | 2026-03-23 01:08:23.384334 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:08:23.384340 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:08:23.384347 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:08:23.384352 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:08:23.384358 | orchestrator | 2026-03-23 01:08:23.384362 | orchestrator | 2026-03-23 01:08:23.384368 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-23 01:08:23.384373 | orchestrator | Monday 23 March 2026 01:07:00 +0000 (0:00:01.211) 0:00:02.240 ********** 2026-03-23 01:08:23.384398 | orchestrator | =============================================================================== 2026-03-23 01:08:23.384419 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.21s 2026-03-23 01:08:23.384425 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-03-23 01:08:23.384429 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-03-23 01:08:23.384434 | orchestrator | 2026-03-23 01:08:23.384438 | orchestrator | 2026-03-23 01:08:23.384443 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:08:23.384448 | orchestrator | 2026-03-23 01:08:23.384453 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 01:08:23.384457 | orchestrator | Monday 23 March 2026 01:06:30 +0000 (0:00:00.230) 0:00:00.230 ********** 2026-03-23 01:08:23.384462 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:08:23.384467 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:08:23.384471 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:08:23.384476 | orchestrator | 2026-03-23 01:08:23.384482 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 01:08:23.384528 | orchestrator | Monday 23 March 2026 01:06:31 +0000 (0:00:00.266) 0:00:00.497 ********** 2026-03-23 01:08:23.384532 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-23 01:08:23.384535 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-23 01:08:23.384538 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-23 01:08:23.384541 | orchestrator | 2026-03-23 
01:08:23.384544 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-23 01:08:23.384548 | orchestrator | 2026-03-23 01:08:23.384551 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-23 01:08:23.384554 | orchestrator | Monday 23 March 2026 01:06:31 +0000 (0:00:00.269) 0:00:00.766 ********** 2026-03-23 01:08:23.384557 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:08:23.384560 | orchestrator | 2026-03-23 01:08:23.384662 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-23 01:08:23.384672 | orchestrator | Monday 23 March 2026 01:06:32 +0000 (0:00:00.540) 0:00:01.306 ********** 2026-03-23 01:08:23.384677 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-23 01:08:23.384681 | orchestrator | 2026-03-23 01:08:23.384685 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-23 01:08:23.384689 | orchestrator | Monday 23 March 2026 01:06:35 +0000 (0:00:03.830) 0:00:05.136 ********** 2026-03-23 01:08:23.384693 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-23 01:08:23.384697 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-23 01:08:23.384701 | orchestrator | 2026-03-23 01:08:23.384705 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-23 01:08:23.384708 | orchestrator | Monday 23 March 2026 01:06:41 +0000 (0:00:05.733) 0:00:10.869 ********** 2026-03-23 01:08:23.384712 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-23 01:08:23.384715 | orchestrator | 2026-03-23 01:08:23.384719 | orchestrator | TASK [service-ks-register : magnum | 
Creating users] *************************** 2026-03-23 01:08:23.384723 | orchestrator | Monday 23 March 2026 01:06:44 +0000 (0:00:03.205) 0:00:14.075 ********** 2026-03-23 01:08:23.384737 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-23 01:08:23.384741 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-23 01:08:23.384745 | orchestrator | 2026-03-23 01:08:23.384749 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-23 01:08:23.384752 | orchestrator | Monday 23 March 2026 01:06:49 +0000 (0:00:04.175) 0:00:18.251 ********** 2026-03-23 01:08:23.384756 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-23 01:08:23.384759 | orchestrator | 2026-03-23 01:08:23.384763 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-23 01:08:23.384772 | orchestrator | Monday 23 March 2026 01:06:52 +0000 (0:00:03.723) 0:00:21.975 ********** 2026-03-23 01:08:23.384776 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-23 01:08:23.384779 | orchestrator | 2026-03-23 01:08:23.384783 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-23 01:08:23.384786 | orchestrator | Monday 23 March 2026 01:06:56 +0000 (0:00:04.003) 0:00:25.978 ********** 2026-03-23 01:08:23.384790 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:23.384793 | orchestrator | 2026-03-23 01:08:23.384797 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-23 01:08:23.384801 | orchestrator | Monday 23 March 2026 01:07:00 +0000 (0:00:03.541) 0:00:29.520 ********** 2026-03-23 01:08:23.384804 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:23.384808 | orchestrator | 2026-03-23 01:08:23.384812 | orchestrator | TASK [magnum : Creating Magnum trustee user role] 
****************************** 2026-03-23 01:08:23.384815 | orchestrator | Monday 23 March 2026 01:07:04 +0000 (0:00:04.422) 0:00:33.943 ********** 2026-03-23 01:08:23.384819 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:23.384822 | orchestrator | 2026-03-23 01:08:23.384826 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-23 01:08:23.384829 | orchestrator | Monday 23 March 2026 01:07:09 +0000 (0:00:04.358) 0:00:38.302 ********** 2026-03-23 01:08:23.384839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.384845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.384848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.384857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.384862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.384868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.384871 | orchestrator | 2026-03-23 01:08:23.384875 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-23 01:08:23.384879 | orchestrator | Monday 23 March 2026 01:07:11 +0000 (0:00:01.987) 0:00:40.289 ********** 2026-03-23 01:08:23.384882 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:23.384886 | 
orchestrator | 2026-03-23 01:08:23.384890 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-23 01:08:23.384893 | orchestrator | Monday 23 March 2026 01:07:11 +0000 (0:00:00.113) 0:00:40.403 ********** 2026-03-23 01:08:23.384896 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:23.384900 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:23.384906 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:23.384911 | orchestrator | 2026-03-23 01:08:23.384916 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-23 01:08:23.384921 | orchestrator | Monday 23 March 2026 01:07:11 +0000 (0:00:00.299) 0:00:40.703 ********** 2026-03-23 01:08:23.384926 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-23 01:08:23.384931 | orchestrator | 2026-03-23 01:08:23.384936 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-23 01:08:23.384942 | orchestrator | Monday 23 March 2026 01:07:12 +0000 (0:00:00.894) 0:00:41.597 ********** 2026-03-23 01:08:23.384957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.384970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.384976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.384985 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.384988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.384991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.384997 | orchestrator | 2026-03-23 01:08:23.385001 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-23 01:08:23.385006 | orchestrator | Monday 23 March 2026 01:07:15 +0000 (0:00:02.836) 0:00:44.434 ********** 2026-03-23 01:08:23.385011 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:08:23.385019 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:08:23.385024 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:08:23.385029 | orchestrator | 2026-03-23 01:08:23.385034 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-23 01:08:23.385042 | orchestrator | Monday 23 March 2026 01:07:15 +0000 (0:00:00.460) 0:00:44.894 ********** 2026-03-23 01:08:23.385047 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:08:23.385052 | orchestrator | 2026-03-23 01:08:23.385057 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-23 01:08:23.385062 | orchestrator | Monday 23 March 2026 01:07:16 +0000 (0:00:00.515) 0:00:45.409 ********** 2026-03-23 01:08:23.385067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385109 | orchestrator | 2026-03-23 01:08:23.385114 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-23 01:08:23.385118 | orchestrator | Monday 23 March 2026 01:07:18 +0000 (0:00:02.598) 0:00:48.007 ********** 2026-03-23 01:08:23.385125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 01:08:23.385131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:08:23.385139 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:23.385144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 01:08:23.385153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:08:23.385157 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:23.385162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 01:08:23.385169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:08:23.385174 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:23.385189 | orchestrator | 2026-03-23 01:08:23.385200 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-23 01:08:23.385205 | orchestrator | Monday 23 March 2026 01:07:19 +0000 (0:00:01.058) 0:00:49.065 ********** 2026-03-23 01:08:23.385210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 01:08:23.385215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:08:23.385223 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:23.385234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 01:08:23.385240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})  2026-03-23 01:08:23.385244 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:23.385251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 01:08:23.385260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:08:23.385264 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:23.385269 | orchestrator | 2026-03-23 01:08:23.385274 | orchestrator | TASK [magnum : Copying over config.json files 
for services] ******************** 2026-03-23 01:08:23.385279 | orchestrator | Monday 23 March 2026 01:07:20 +0000 (0:00:00.995) 0:00:50.061 ********** 2026-03-23 01:08:23.385465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385504 | orchestrator | 2026-03-23 01:08:23.385507 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-23 01:08:23.385511 | orchestrator | Monday 23 March 2026 01:07:23 +0000 (0:00:02.296) 0:00:52.357 ********** 2026-03-23 01:08:23.385514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385541 | orchestrator | 2026-03-23 01:08:23.385544 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-23 01:08:23.385547 | orchestrator | Monday 23 March 2026 01:07:28 +0000 (0:00:05.370) 0:00:57.728 ********** 2026-03-23 01:08:23.385555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 01:08:23.385558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:08:23.385561 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:23.385565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 01:08:23.385570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:08:23.385574 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:23.385577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-23 01:08:23.385584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:08:23.385588 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:23.385591 | orchestrator | 2026-03-23 01:08:23.385594 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-23 01:08:23.385597 | orchestrator | Monday 23 March 2026 01:07:29 +0000 (0:00:00.608) 0:00:58.336 ********** 2026-03-23 01:08:23.385600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-23 01:08:23.385616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:08:23.385626 | orchestrator | 2026-03-23 01:08:23.385629 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-23 01:08:23.385632 | orchestrator | Monday 23 March 2026 01:07:31 +0000 (0:00:02.343) 0:01:00.680 ********** 2026-03-23 01:08:23.385635 | orchestrator 
| skipping: [testbed-node-0] 2026-03-23 01:08:23.385638 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:23.385641 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:23.385644 | orchestrator | 2026-03-23 01:08:23.385647 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-23 01:08:23.385650 | orchestrator | Monday 23 March 2026 01:07:31 +0000 (0:00:00.252) 0:01:00.933 ********** 2026-03-23 01:08:23.385654 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:23.385657 | orchestrator | 2026-03-23 01:08:23.385660 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-23 01:08:23.385663 | orchestrator | Monday 23 March 2026 01:07:33 +0000 (0:00:02.184) 0:01:03.117 ********** 2026-03-23 01:08:23.385666 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:23.385669 | orchestrator | 2026-03-23 01:08:23.385672 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-23 01:08:23.385675 | orchestrator | Monday 23 March 2026 01:07:36 +0000 (0:00:02.415) 0:01:05.533 ********** 2026-03-23 01:08:23.385680 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:23.385683 | orchestrator | 2026-03-23 01:08:23.385686 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-23 01:08:23.385689 | orchestrator | Monday 23 March 2026 01:07:53 +0000 (0:00:17.439) 0:01:22.972 ********** 2026-03-23 01:08:23.385694 | orchestrator | 2026-03-23 01:08:23.385697 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-23 01:08:23.385700 | orchestrator | Monday 23 March 2026 01:07:53 +0000 (0:00:00.215) 0:01:23.188 ********** 2026-03-23 01:08:23.385703 | orchestrator | 2026-03-23 01:08:23.385706 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-23 
01:08:23.385710 | orchestrator | Monday 23 March 2026 01:07:54 +0000 (0:00:00.064) 0:01:23.252 ********** 2026-03-23 01:08:23.385713 | orchestrator | 2026-03-23 01:08:23.385716 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-23 01:08:23.385719 | orchestrator | Monday 23 March 2026 01:07:54 +0000 (0:00:00.063) 0:01:23.316 ********** 2026-03-23 01:08:23.385722 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:23.385725 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:08:23.385728 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:08:23.385731 | orchestrator | 2026-03-23 01:08:23.385734 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-23 01:08:23.385737 | orchestrator | Monday 23 March 2026 01:08:06 +0000 (0:00:12.535) 0:01:35.851 ********** 2026-03-23 01:08:23.385740 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:23.385743 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:08:23.385746 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:08:23.385749 | orchestrator | 2026-03-23 01:08:23.385753 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:08:23.385756 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-23 01:08:23.385760 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-23 01:08:23.385763 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-23 01:08:23.385766 | orchestrator | 2026-03-23 01:08:23.385769 | orchestrator | 2026-03-23 01:08:23.385772 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:08:23.385777 | orchestrator | Monday 23 March 2026 01:08:21 +0000 (0:00:14.589) 0:01:50.441 ********** 
2026-03-23 01:08:23.385780 | orchestrator | =============================================================================== 2026-03-23 01:08:23.385783 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.44s 2026-03-23 01:08:23.385786 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.59s 2026-03-23 01:08:23.385789 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.54s 2026-03-23 01:08:23.385792 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.73s 2026-03-23 01:08:23.385795 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.37s 2026-03-23 01:08:23.385798 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.42s 2026-03-23 01:08:23.385801 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.36s 2026-03-23 01:08:23.385804 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.18s 2026-03-23 01:08:23.385807 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.00s 2026-03-23 01:08:23.385810 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.83s 2026-03-23 01:08:23.385813 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.72s 2026-03-23 01:08:23.385817 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.54s 2026-03-23 01:08:23.385820 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.21s 2026-03-23 01:08:23.385823 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.84s 2026-03-23 01:08:23.385829 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.60s 2026-03-23 
01:08:23.385833 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.42s 2026-03-23 01:08:23.385836 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.34s 2026-03-23 01:08:23.385839 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.30s 2026-03-23 01:08:23.385842 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.18s 2026-03-23 01:08:23.385845 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.99s 2026-03-23 01:08:23.385848 | orchestrator | 2026-03-23 01:08:23 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state STARTED 2026-03-23 01:08:23.386081 | orchestrator | 2026-03-23 01:08:23 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:08:23.387002 | orchestrator | 2026-03-23 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:08:47.726180 | orchestrator | 2026-03-23 01:08:47 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED 2026-03-23 01:08:47.731223 | orchestrator | 2026-03-23 01:08:47 | INFO  | Task 87ab3e20-2804-47d4-86bc-14bf20c6bc6d is in state SUCCESS 2026-03-23 01:08:47.733333 | orchestrator | 2026-03-23 01:08:47.733385 | orchestrator | 2026-03-23 01:08:47.733390 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:08:47.733394 | orchestrator | 2026-03-23 01:08:47.733397 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-23 01:08:47.733401 | orchestrator | Monday 23 March 2026 01:06:48 +0000 (0:00:00.309) 0:00:00.309 ********** 2026-03-23 01:08:47.733406 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:08:47.733412 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:08:47.733418 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:08:47.733423 | orchestrator | 2026-03-23 01:08:47.733428 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-23 01:08:47.733433 | orchestrator | Monday 23 March 2026 01:06:49 +0000 (0:00:00.292) 0:00:00.602 ********** 2026-03-23 01:08:47.733439 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-23
01:08:47.733446 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-23 01:08:47.733451 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-23 01:08:47.733456 | orchestrator | 2026-03-23 01:08:47.733462 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-23 01:08:47.733467 | orchestrator | 2026-03-23 01:08:47.733472 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-23 01:08:47.733477 | orchestrator | Monday 23 March 2026 01:06:49 +0000 (0:00:00.309) 0:00:00.911 ********** 2026-03-23 01:08:47.733483 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:08:47.733487 | orchestrator | 2026-03-23 01:08:47.733492 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-23 01:08:47.733497 | orchestrator | Monday 23 March 2026 01:06:50 +0000 (0:00:00.593) 0:00:01.505 ********** 2026-03-23 01:08:47.733506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733588 | orchestrator | 2026-03-23 01:08:47.733593 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-23 01:08:47.733598 | orchestrator | Monday 23 March 2026 01:06:51 +0000 (0:00:01.128) 0:00:02.633 ********** 2026-03-23 01:08:47.733603 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-23 01:08:47.733609 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-23 01:08:47.733613 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-23 01:08:47.733618 | orchestrator | 2026-03-23 01:08:47.733624 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-23 01:08:47.733629 | orchestrator | Monday 23 March 2026 
01:06:52 +0000 (0:00:00.896) 0:00:03.529 ********** 2026-03-23 01:08:47.733634 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:08:47.733639 | orchestrator | 2026-03-23 01:08:47.733644 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-23 01:08:47.733648 | orchestrator | Monday 23 March 2026 01:06:52 +0000 (0:00:00.507) 0:00:04.037 ********** 2026-03-23 01:08:47.733665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733687 | orchestrator | 2026-03-23 01:08:47.733692 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-23 01:08:47.733698 | orchestrator | Monday 23 March 2026 01:06:54 +0000 (0:00:01.459) 0:00:05.497 ********** 2026-03-23 01:08:47.733711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-23 01:08:47.733719 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:47.733724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-23 01:08:47.733729 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:47.733737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-23 01:08:47.733743 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:47.733749 | orchestrator | 2026-03-23 01:08:47.733754 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-23 01:08:47.733759 | orchestrator | Monday 23 March 2026 01:06:54 +0000 (0:00:00.343) 0:00:05.840 ********** 2026-03-23 01:08:47.733764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-23 01:08:47.733770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-23 01:08:47.733780 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:47.733785 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:47.733790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-23 01:08:47.733795 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:47.733801 | orchestrator | 2026-03-23 01:08:47.733806 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 
2026-03-23 01:08:47.733811 | orchestrator | Monday 23 March 2026 01:06:54 +0000 (0:00:00.541) 0:00:06.382 ********** 2026-03-23 01:08:47.733819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733839 | orchestrator | 2026-03-23 01:08:47.733844 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-23 01:08:47.733848 | orchestrator | Monday 23 March 2026 01:06:56 +0000 (0:00:01.541) 0:00:07.923 ********** 2026-03-23 01:08:47.733853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733867 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.733872 | orchestrator | 2026-03-23 01:08:47.734006 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-23 01:08:47.734071 | orchestrator | Monday 23 March 2026 01:06:57 +0000 (0:00:01.465) 0:00:09.388 ********** 2026-03-23 01:08:47.734079 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:47.734085 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:47.734090 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:47.734096 | orchestrator | 2026-03-23 01:08:47.734100 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-23 01:08:47.734104 | orchestrator | Monday 23 March 2026 01:06:58 +0000 (0:00:00.300) 0:00:09.688 ********** 2026-03-23 01:08:47.734108 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-23 01:08:47.734112 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-23 01:08:47.734116 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-23 01:08:47.734119 | orchestrator | 2026-03-23 01:08:47.734123 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-23 01:08:47.734127 | 
orchestrator | Monday 23 March 2026 01:06:59 +0000 (0:00:01.239) 0:00:10.928 ********** 2026-03-23 01:08:47.734131 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-23 01:08:47.734135 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-23 01:08:47.734138 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-23 01:08:47.734142 | orchestrator | 2026-03-23 01:08:47.734146 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-23 01:08:47.734149 | orchestrator | Monday 23 March 2026 01:07:00 +0000 (0:00:01.315) 0:00:12.243 ********** 2026-03-23 01:08:47.734160 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-23 01:08:47.734165 | orchestrator | 2026-03-23 01:08:47.734179 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-23 01:08:47.734185 | orchestrator | Monday 23 March 2026 01:07:01 +0000 (0:00:00.946) 0:00:13.190 ********** 2026-03-23 01:08:47.734190 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-23 01:08:47.734195 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-23 01:08:47.734200 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:08:47.734207 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:08:47.734212 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:08:47.734217 | orchestrator | 2026-03-23 01:08:47.734223 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-23 01:08:47.734228 | orchestrator | Monday 23 March 2026 01:07:02 +0000 (0:00:00.996) 0:00:14.186 ********** 2026-03-23 01:08:47.734233 | orchestrator | skipping: [testbed-node-0] 2026-03-23 
01:08:47.734239 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:47.734244 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:47.734249 | orchestrator | 2026-03-23 01:08:47.734254 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-23 01:08:47.734260 | orchestrator | Monday 23 March 2026 01:07:03 +0000 (0:00:00.481) 0:00:14.667 ********** 2026-03-23 01:08:47.734266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1313434, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5327735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1313434, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5327735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1313434, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5327735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1313465, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5581913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1313465, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5581913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734309 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1313465, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5581913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1313526, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6211927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1313526, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6211927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734325 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1313526, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6211927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1313462, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.539191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1313462, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.539191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734355 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1313462, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.539191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1313529, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6261928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1313529, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6261928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 
01:08:47.734372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1313529, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6261928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1313441, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5342715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1313441, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5342715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1313441, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5342715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1313485, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5630734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1313485, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5630734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1313485, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5630734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1313513, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6191647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1313513, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6191647, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1313513, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6191647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1313432, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.53138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1313432, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.53138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1313432, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.53138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1313439, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5333807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1313439, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5333807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1313439, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5333807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1313463, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5398836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1313463, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5398836, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1313463, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5398836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1313497, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6151824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1313497, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6151824, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1313497, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6151824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1313523, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6208043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1313523, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 
1774225131.6208043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1313523, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6208043, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1313458, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.5390034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1313458, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 
1774224151.0, 'ctime': 1774225131.5390034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.734548 – 01:08:47.735029 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (loop items condensed — each Grafana dashboard below produced an identical stat dict per node: regular file under /operations/grafana/dashboards/, mode '0644', uid/gid 0, root:root, nlink 1, dev 139, atime/mtime 1774224151.0) items (key, size in bytes): ceph/ceph_overview.json (80386), ceph/radosgw-detail.json (20042), ceph/smb-overview.json (29877), ceph/osds-overview.json (38375), ceph/multi-cluster-overview.json (63043), ceph/hosts-overview.json (27387), ceph/pool-overview.json (49016), ceph/host-details.json (43303), ceph/radosgw-sync-overview.json (16614), ceph/ceph-nvmeof.json (52667), openstack/openstack.json (57270), infrastructure/haproxy.json (410814), infrastructure/database.json (30898), infrastructure/node-rsrc-use.json (15767), infrastructure/alertmanager-overview.json (9645), infrastructure/opensearch.json (65458), infrastructure/node_exporter_full.json (682774), infrastructure/prometheus-remote-write.json (22303), infrastructure/redfish.json (38087), infrastructure/nodes.json (21194), infrastructure/memcached.json (24243) 2026-03-23 01:08:47.735038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1313586, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6461933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr':
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1313572, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6391933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1313572, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6391933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1313572, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6391933, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1313584, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6441934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1313567, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6382723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1313584, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 
'mtime': 1774224151.0, 'ctime': 1774225131.6441934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1313584, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6441934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1313587, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6461933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 187864, 'inode': 1313567, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6382723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1313567, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6382723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1313722, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6971946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1313587, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6461933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1313587, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6461933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1313705, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6891944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1313722, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6971946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1313722, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6971946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1313558, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.633193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735316 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1313705, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6891944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1313705, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6891944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1313561, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6343687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1313558, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.633193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1313558, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.633193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1313684, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.67751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1313561, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6343687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1313561, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.6343687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1313684, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 
1774225131.67751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1313704, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.681194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1313684, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.67751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 
'inode': 1313704, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.681194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1313704, 'dev': 139, 'nlink': 1, 'atime': 1774224151.0, 'mtime': 1774224151.0, 'ctime': 1774225131.681194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-23 01:08:47.735423 | orchestrator | 2026-03-23 01:08:47.735429 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-23 01:08:47.735434 | orchestrator | Monday 23 March 2026 01:07:44 +0000 (0:00:41.488) 0:00:56.156 ********** 2026-03-23 01:08:47.735439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2026-03-23 01:08:47.735445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.735454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-23 01:08:47.735459 | orchestrator | 2026-03-23 01:08:47.735464 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-23 01:08:47.735469 | orchestrator | Monday 23 March 2026 01:07:45 +0000 (0:00:01.323) 0:00:57.479 ********** 2026-03-23 01:08:47.735474 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:47.735480 | orchestrator | 2026-03-23 01:08:47.735485 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-23 01:08:47.735490 | orchestrator | Monday 23 March 2026 
01:07:48 +0000 (0:00:02.209) 0:00:59.688 ********** 2026-03-23 01:08:47.735495 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:47.735500 | orchestrator | 2026-03-23 01:08:47.735505 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-23 01:08:47.735511 | orchestrator | Monday 23 March 2026 01:07:50 +0000 (0:00:02.028) 0:01:01.717 ********** 2026-03-23 01:08:47.735520 | orchestrator | 2026-03-23 01:08:47.735525 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-23 01:08:47.735530 | orchestrator | Monday 23 March 2026 01:07:50 +0000 (0:00:00.080) 0:01:01.798 ********** 2026-03-23 01:08:47.735535 | orchestrator | 2026-03-23 01:08:47.735541 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-23 01:08:47.735546 | orchestrator | Monday 23 March 2026 01:07:50 +0000 (0:00:00.081) 0:01:01.880 ********** 2026-03-23 01:08:47.735551 | orchestrator | 2026-03-23 01:08:47.735557 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-23 01:08:47.735562 | orchestrator | Monday 23 March 2026 01:07:50 +0000 (0:00:00.072) 0:01:01.952 ********** 2026-03-23 01:08:47.735567 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:47.735575 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:47.735581 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:08:47.735586 | orchestrator | 2026-03-23 01:08:47.735592 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-23 01:08:47.735597 | orchestrator | Monday 23 March 2026 01:07:52 +0000 (0:00:01.845) 0:01:03.798 ********** 2026-03-23 01:08:47.735602 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:47.735607 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:47.735612 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for 
grafana to start on first node (12 retries left). 2026-03-23 01:08:47.735618 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-23 01:08:47.735623 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:08:47.735629 | orchestrator | 2026-03-23 01:08:47.735634 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-23 01:08:47.735639 | orchestrator | Monday 23 March 2026 01:08:19 +0000 (0:00:27.053) 0:01:30.852 ********** 2026-03-23 01:08:47.735644 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:47.735650 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:08:47.735655 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:08:47.735660 | orchestrator | 2026-03-23 01:08:47.735665 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-23 01:08:47.735670 | orchestrator | Monday 23 March 2026 01:08:41 +0000 (0:00:22.044) 0:01:52.896 ********** 2026-03-23 01:08:47.735675 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:08:47.735680 | orchestrator | 2026-03-23 01:08:47.735685 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-23 01:08:47.735690 | orchestrator | Monday 23 March 2026 01:08:43 +0000 (0:00:02.311) 0:01:55.207 ********** 2026-03-23 01:08:47.735695 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:47.735700 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:08:47.735706 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:08:47.735711 | orchestrator | 2026-03-23 01:08:47.735715 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-23 01:08:47.735721 | orchestrator | Monday 23 March 2026 01:08:44 +0000 (0:00:00.324) 0:01:55.531 ********** 2026-03-23 01:08:47.735726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': 
{'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-23 01:08:47.735730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-23 01:08:47.735734 | orchestrator | 2026-03-23 01:08:47.735737 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-23 01:08:47.735740 | orchestrator | Monday 23 March 2026 01:08:46 +0000 (0:00:02.297) 0:01:57.829 ********** 2026-03-23 01:08:47.735746 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:08:47.735749 | orchestrator | 2026-03-23 01:08:47.735753 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:08:47.735757 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-23 01:08:47.735762 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-23 01:08:47.735766 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-23 01:08:47.735769 | orchestrator | 2026-03-23 01:08:47.735772 | orchestrator | 2026-03-23 01:08:47.735775 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:08:47.735778 | orchestrator | Monday 23 March 2026 01:08:46 +0000 (0:00:00.242) 0:01:58.072 ********** 2026-03-23 01:08:47.735781 | orchestrator | =============================================================================== 
2026-03-23 01:08:47.735784 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 41.49s
2026-03-23 01:08:47.735787 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.05s
2026-03-23 01:08:47.735791 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 22.04s
2026-03-23 01:08:47.735794 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.31s
2026-03-23 01:08:47.735797 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.30s
2026-03-23 01:08:47.735800 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.21s
2026-03-23 01:08:47.735803 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.03s
2026-03-23 01:08:47.735806 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.85s
2026-03-23 01:08:47.735809 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.54s
2026-03-23 01:08:47.735812 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.47s
2026-03-23 01:08:47.735816 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.46s
2026-03-23 01:08:47.735819 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.32s
2026-03-23 01:08:47.735824 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.32s
2026-03-23 01:08:47.735827 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s
2026-03-23 01:08:47.735830 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.13s
2026-03-23 01:08:47.735833 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.00s
2026-03-23 01:08:47.735836 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.95s
2026-03-23 01:08:47.735839 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.90s
2026-03-23 01:08:47.735843 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.59s
2026-03-23 01:08:47.735846 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.54s
2026-03-23 01:08:47.735849 | orchestrator | 2026-03-23 01:08:47 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED
2026-03-23 01:08:47.735852 | orchestrator | 2026-03-23 01:08:47 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:08:50.785542 | orchestrator | 2026-03-23 01:08:50 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:08:50.787130 | orchestrator | 2026-03-23 01:08:50 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED
2026-03-23 01:08:50.787180 | orchestrator | 2026-03-23 01:08:50 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:08:53.826637 | orchestrator | 2026-03-23 01:08:53 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:08:53.828150 | orchestrator | 2026-03-23 01:08:53 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED
2026-03-23 01:08:53.828416 | orchestrator | 2026-03-23 01:08:53 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:08:56.869078 | orchestrator | 2026-03-23 01:08:56 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:08:56.870797 | orchestrator | 2026-03-23 01:08:56 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED
2026-03-23 01:08:56.870838 | orchestrator | 2026-03-23 01:08:56 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:08:59.916605 | orchestrator | 2026-03-23 01:08:59 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:08:59.918256 | orchestrator | 2026-03-23 01:08:59 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED
2026-03-23 01:08:59.918308 | orchestrator | 2026-03-23 01:08:59 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:09:02.965227 | orchestrator | 2026-03-23 01:09:02 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:09:02.967747 | orchestrator | 2026-03-23 01:09:02 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED
2026-03-23 01:09:02.967799 | orchestrator | 2026-03-23 01:09:02 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:09:06.004386 | orchestrator | 2026-03-23 01:09:06 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:09:06.005387 | orchestrator | 2026-03-23 01:09:06 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED
2026-03-23 01:09:06.005431 | orchestrator | 2026-03-23 01:09:06 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:09:09.044431 | orchestrator | 2026-03-23 01:09:09 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state STARTED
2026-03-23 01:09:09.046193 | orchestrator | 2026-03-23 01:09:09 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED
2026-03-23 01:09:09.046242 | orchestrator | 2026-03-23 01:09:09 | INFO  | Wait 1 second(s) until the next check
2026-03-23 01:09:12.090756 | orchestrator | 2026-03-23 01:09:12 | INFO  | Task efd2496d-b074-49c7-8fc2-615648d53563 is in state SUCCESS
2026-03-23 01:09:12.092499 | orchestrator |
2026-03-23 01:09:12.092559 | orchestrator |
2026-03-23 01:09:12.092566 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 01:09:12.092573 | orchestrator |
2026-03-23 01:09:12.092578 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-23
01:09:12.092584 | orchestrator | Monday 23 March 2026 01:00:25 +0000 (0:00:00.287) 0:00:00.287 **********
2026-03-23 01:09:12.092590 | orchestrator | changed: [testbed-manager]
2026-03-23 01:09:12.092627 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.092633 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:09:12.092638 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:09:12.092644 | orchestrator | changed: [testbed-node-3]
2026-03-23 01:09:12.092649 | orchestrator | changed: [testbed-node-4]
2026-03-23 01:09:12.092654 | orchestrator | changed: [testbed-node-5]
2026-03-23 01:09:12.092659 | orchestrator |
2026-03-23 01:09:12.092665 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-23 01:09:12.092670 | orchestrator | Monday 23 March 2026 01:00:26 +0000 (0:00:00.709) 0:00:00.997 **********
2026-03-23 01:09:12.092675 | orchestrator | changed: [testbed-manager]
2026-03-23 01:09:12.092680 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.092685 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:09:12.092690 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:09:12.092722 | orchestrator | changed: [testbed-node-3]
2026-03-23 01:09:12.092728 | orchestrator | changed: [testbed-node-4]
2026-03-23 01:09:12.092733 | orchestrator | changed: [testbed-node-5]
2026-03-23 01:09:12.092755 | orchestrator |
2026-03-23 01:09:12.092760 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 01:09:12.092765 | orchestrator | Monday 23 March 2026 01:00:27 +0000 (0:00:00.713) 0:00:01.710 **********
2026-03-23 01:09:12.092771 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-23 01:09:12.092800 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-23 01:09:12.092806 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-23 01:09:12.092811 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-23 01:09:12.092817 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-23 01:09:12.092822 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-23 01:09:12.092827 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-23 01:09:12.092832 | orchestrator |
2026-03-23 01:09:12.092838 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-23 01:09:12.092843 | orchestrator |
2026-03-23 01:09:12.092848 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-23 01:09:12.092854 | orchestrator | Monday 23 March 2026 01:00:27 +0000 (0:00:00.750) 0:00:02.460 **********
2026-03-23 01:09:12.092859 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:09:12.092979 | orchestrator |
2026-03-23 01:09:12.092988 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-23 01:09:12.092994 | orchestrator | Monday 23 March 2026 01:00:28 +0000 (0:00:00.753) 0:00:03.214 **********
2026-03-23 01:09:12.092999 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-23 01:09:12.093005 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-23 01:09:12.093010 | orchestrator |
2026-03-23 01:09:12.093015 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-23 01:09:12.093020 | orchestrator | Monday 23 March 2026 01:00:33 +0000 (0:00:05.330) 0:00:08.545 **********
2026-03-23 01:09:12.093026 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-23 01:09:12.093031 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-23 01:09:12.093068 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093074 | orchestrator |
2026-03-23 01:09:12.093080 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-23 01:09:12.093086 | orchestrator | Monday 23 March 2026 01:00:38 +0000 (0:00:04.812) 0:00:13.357 **********
2026-03-23 01:09:12.093092 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093098 | orchestrator |
2026-03-23 01:09:12.093104 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-23 01:09:12.093110 | orchestrator | Monday 23 March 2026 01:00:39 +0000 (0:00:00.778) 0:00:14.135 **********
2026-03-23 01:09:12.093116 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093122 | orchestrator |
2026-03-23 01:09:12.093128 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-23 01:09:12.093134 | orchestrator | Monday 23 March 2026 01:00:41 +0000 (0:00:01.579) 0:00:15.715 **********
2026-03-23 01:09:12.093140 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093145 | orchestrator |
2026-03-23 01:09:12.093151 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-23 01:09:12.093157 | orchestrator | Monday 23 March 2026 01:00:44 +0000 (0:00:02.986) 0:00:18.702 **********
2026-03-23 01:09:12.093163 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.093168 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093174 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093180 | orchestrator |
2026-03-23 01:09:12.093185 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-23 01:09:12.093191 | orchestrator | Monday 23 March 2026 01:00:44 +0000 (0:00:00.500) 0:00:19.202 **********
2026-03-23 01:09:12.093215 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:09:12.093221 | orchestrator |
2026-03-23 01:09:12.093227 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-23 01:09:12.093233 | orchestrator | Monday 23 March 2026 01:01:18 +0000 (0:00:33.566) 0:00:52.769 **********
2026-03-23 01:09:12.093239 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093245 | orchestrator |
2026-03-23 01:09:12.093251 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-23 01:09:12.093257 | orchestrator | Monday 23 March 2026 01:01:32 +0000 (0:00:14.883) 0:01:07.652 **********
2026-03-23 01:09:12.093263 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:09:12.093269 | orchestrator |
2026-03-23 01:09:12.093275 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-23 01:09:12.093280 | orchestrator | Monday 23 March 2026 01:01:47 +0000 (0:00:01.311) 0:01:21.863 **********
2026-03-23 01:09:12.093301 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:09:12.093307 | orchestrator |
2026-03-23 01:09:12.093313 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-23 01:09:12.093341 | orchestrator | Monday 23 March 2026 01:01:48 +0000 (0:00:01.311) 0:01:23.174 **********
2026-03-23 01:09:12.093347 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.093352 | orchestrator |
2026-03-23 01:09:12.093358 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-23 01:09:12.093364 | orchestrator | Monday 23 March 2026 01:01:48 +0000 (0:00:00.486) 0:01:23.660 **********
2026-03-23 01:09:12.093370 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:09:12.093376 | orchestrator |
2026-03-23 01:09:12.093381 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-23 01:09:12.093387 | orchestrator | Monday 23 March 2026 01:01:49 +0000 (0:00:00.610) 0:01:24.271 **********
2026-03-23 01:09:12.093393 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:09:12.093398 | orchestrator |
2026-03-23 01:09:12.093404 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-23 01:09:12.093422 | orchestrator | Monday 23 March 2026 01:02:07 +0000 (0:00:18.376) 0:01:42.647 **********
2026-03-23 01:09:12.093427 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.093432 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093437 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093442 | orchestrator |
2026-03-23 01:09:12.093447 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-23 01:09:12.093452 | orchestrator |
2026-03-23 01:09:12.093457 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-23 01:09:12.093463 | orchestrator | Monday 23 March 2026 01:02:08 +0000 (0:00:00.321) 0:01:42.969 **********
2026-03-23 01:09:12.093468 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:09:12.093473 | orchestrator |
2026-03-23 01:09:12.093478 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-23 01:09:12.093510 | orchestrator | Monday 23 March 2026 01:02:09 +0000 (0:00:00.701) 0:01:43.670 **********
2026-03-23 01:09:12.093515 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093520 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093525 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093565 | orchestrator |
2026-03-23 01:09:12.093572 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-23 01:09:12.093577 | orchestrator | Monday 23 March 2026 01:02:10 +0000 (0:00:01.866) 0:01:45.536 **********
2026-03-23 01:09:12.093582 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093587 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093592 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093597 | orchestrator |
2026-03-23 01:09:12.093602 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-23 01:09:12.093612 | orchestrator | Monday 23 March 2026 01:02:13 +0000 (0:00:02.163) 0:01:47.700 **********
2026-03-23 01:09:12.093617 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.093622 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093627 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093631 | orchestrator |
2026-03-23 01:09:12.093636 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-23 01:09:12.093641 | orchestrator | Monday 23 March 2026 01:02:13 +0000 (0:00:00.787) 0:01:48.488 **********
2026-03-23 01:09:12.093646 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-23 01:09:12.093651 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093655 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-23 01:09:12.093660 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093665 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-23 01:09:12.093670 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-23 01:09:12.093675 | orchestrator |
2026-03-23 01:09:12.093680 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-23 01:09:12.093685 | orchestrator | Monday 23 March 2026 01:02:21 +0000 (0:00:07.779) 0:01:56.267 **********
2026-03-23 01:09:12.093690 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.093742 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093749 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093753 | orchestrator |
2026-03-23 01:09:12.093759 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-23 01:09:12.093764 | orchestrator | Monday 23 March 2026 01:02:21 +0000 (0:00:00.354) 0:01:56.621 **********
2026-03-23 01:09:12.093769 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-23 01:09:12.093774 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.093779 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-23 01:09:12.093784 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093789 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-23 01:09:12.093794 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093799 | orchestrator |
2026-03-23 01:09:12.093804 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-23 01:09:12.093809 | orchestrator | Monday 23 March 2026 01:02:23 +0000 (0:00:01.467) 0:01:58.089 **********
2026-03-23 01:09:12.093818 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093823 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093828 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093834 | orchestrator |
2026-03-23 01:09:12.093839 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-23 01:09:12.093844 | orchestrator | Monday 23 March 2026 01:02:24 +0000 (0:00:00.876) 0:01:58.966 **********
2026-03-23 01:09:12.093849 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093854 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093859 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093864 | orchestrator |
2026-03-23 01:09:12.093882 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-23 01:09:12.093887 | orchestrator | Monday 23 March 2026 01:02:25 +0000 (0:00:01.063) 0:02:00.029 **********
2026-03-23 01:09:12.093892 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093896 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093910 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.093915 | orchestrator |
2026-03-23 01:09:12.093920 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-23 01:09:12.093924 | orchestrator | Monday 23 March 2026 01:02:27 +0000 (0:00:02.339) 0:02:02.369 **********
2026-03-23 01:09:12.093929 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093934 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093939 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:09:12.093944 | orchestrator |
2026-03-23 01:09:12.093948 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-23 01:09:12.093958 | orchestrator | Monday 23 March 2026 01:02:48 +0000 (0:00:20.901) 0:02:23.270 **********
2026-03-23 01:09:12.093963 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.093968 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.093973 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:09:12.093977 | orchestrator |
2026-03-23 01:09:12.093982 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-23 01:09:12.093986 | orchestrator | Monday 23 March 2026 01:03:01 +0000 (0:00:13.172) 0:02:36.443 **********
2026-03-23 01:09:12.093991 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:09:12.093996 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.094001 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.094005 | orchestrator |
2026-03-23 01:09:12.094010 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-23 01:09:12.094058 | orchestrator | Monday 23 March 2026 01:03:02 +0000 (0:00:00.740) 0:02:37.184 **********
2026-03-23 01:09:12.094063 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.094068 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.094073 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:09:12.094077 | orchestrator |
2026-03-23 01:09:12.094083 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-23 01:09:12.094087 | orchestrator | Monday 23 March 2026 01:03:16 +0000 (0:00:13.782) 0:02:50.966 **********
2026-03-23 01:09:12.094092 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.094097 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.094102 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.094107 | orchestrator |
2026-03-23 01:09:12.094112 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-23 01:09:12.094117 | orchestrator | Monday 23 March 2026 01:03:17 +0000 (0:00:01.314) 0:02:52.281 **********
2026-03-23 01:09:12.094122 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.094126 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.094132 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.094137 | orchestrator |
2026-03-23 01:09:12.094141 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-23 01:09:12.094156 | orchestrator |
2026-03-23 01:09:12.094161 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-23 01:09:12.094166 | orchestrator | Monday 23 March 2026 01:03:18 +0000 (0:00:00.426) 0:02:52.707 **********
2026-03-23 01:09:12.094171 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:09:12.094178 | orchestrator |
2026-03-23 01:09:12.094182 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-23 01:09:12.094188 | orchestrator | Monday 23 March 2026 01:03:19 +0000 (0:00:01.095) 0:02:53.803 **********
2026-03-23 01:09:12.094193 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-23 01:09:12.094198 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-23 01:09:12.094203 | orchestrator |
2026-03-23 01:09:12.094208 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-23 01:09:12.094214 | orchestrator | Monday 23 March 2026 01:03:23 +0000 (0:00:04.033) 0:02:57.837 **********
2026-03-23 01:09:12.094219 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-23 01:09:12.094226 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-23 01:09:12.094231 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-23 01:09:12.094237 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-23 01:09:12.094242 | orchestrator |
2026-03-23 01:09:12.094248 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-23 01:09:12.094258 | orchestrator | Monday 23 March 2026 01:03:29 +0000 (0:00:06.362) 0:03:04.200 **********
2026-03-23 01:09:12.094264 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-23 01:09:12.094269 | orchestrator |
2026-03-23 01:09:12.094274 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-23 01:09:12.094279 | orchestrator | Monday 23 March 2026 01:03:32 +0000 (0:00:03.403) 0:03:07.604 **********
2026-03-23 01:09:12.094284 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-23 01:09:12.094315 | orchestrator | [WARNING]: Module did not set no_log for update_password
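The service-ks-register tasks above register nova with Keystone idempotently, in order: service, endpoints, project, user, then role grants. Items that already exist report `ok` (or `skipping`), while newly created ones report `changed`. A hedged Python sketch of that ensure-style idempotency, using a plain dict as a stand-in registry (the helper names and key layout are illustrative assumptions, not the actual kolla-ansible module):

```python
def ensure(registry, key, value):
    """Create `key` in `registry` if missing or different.

    Returns "changed" when written, "ok" when already present --
    mirroring the per-item statuses in the log above.
    """
    if registry.get(key) == value:
        return "ok"
    registry[key] = value
    return "changed"

def register_nova(registry):
    # Same order as the log: service, endpoints, project, user, grants.
    results = [ensure(registry, ("service", "nova"), "compute")]
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:8774/v2.1"),
        ("public", "https://api.testbed.osism.xyz:8774/v2.1"),
    ]:
        results.append(ensure(registry, ("endpoint", "nova", interface), url))
    results.append(ensure(registry, ("project",), "service"))
    results.append(ensure(registry, ("user", "nova"), "service"))
    for role in ("admin", "service"):
        results.append(ensure(registry, ("grant", "nova", "service", role), role))
    return results
```

Run twice against the same registry, the first pass reports `changed` for every item and the second pass reports `ok` throughout, which is why re-running the deploy play is safe.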
2026-03-23 01:09:12.094320 | orchestrator | 2026-03-23 01:09:12.094329 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-23 01:09:12.094335 | orchestrator | Monday 23 March 2026 01:03:36 +0000 (0:00:03.762) 0:03:11.366 ********** 2026-03-23 01:09:12.094340 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-23 01:09:12.094369 | orchestrator | 2026-03-23 01:09:12.094374 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-23 01:09:12.094380 | orchestrator | Monday 23 March 2026 01:03:40 +0000 (0:00:04.047) 0:03:15.414 ********** 2026-03-23 01:09:12.094385 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-23 01:09:12.094390 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-23 01:09:12.094395 | orchestrator | 2026-03-23 01:09:12.094400 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-23 01:09:12.094412 | orchestrator | Monday 23 March 2026 01:03:48 +0000 (0:00:07.580) 0:03:22.995 ********** 2026-03-23 01:09:12.094421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-23 01:09:12.094429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.094436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094472 | orchestrator |
2026-03-23 01:09:12.094477 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-23 01:09:12.094482 | orchestrator | Monday 23 March 2026 01:03:51 +0000 (0:00:03.343) 0:03:26.338 **********
2026-03-23 01:09:12.094487 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.094492 | orchestrator |
2026-03-23 01:09:12.094497 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-23 01:09:12.094501 | orchestrator | Monday 23 March 2026 01:03:51 +0000 (0:00:00.197) 0:03:26.535 **********
2026-03-23 01:09:12.094506 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.094511 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.094516 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.094521 | orchestrator |
2026-03-23 01:09:12.094526 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-23 01:09:12.094531 | orchestrator | Monday 23 March 2026 01:03:52 +0000 (0:00:00.601) 0:03:27.136 **********
2026-03-23 01:09:12.094539 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-23 01:09:12.094544 | orchestrator |
2026-03-23 01:09:12.094548 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-23 01:09:12.094553 | orchestrator | Monday 23 March 2026 01:03:53 +0000 (0:00:01.052) 0:03:28.189 **********
2026-03-23 01:09:12.094558 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.094563 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.094567 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.094572 | orchestrator |
2026-03-23 01:09:12.094577 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-23 01:09:12.094582 | orchestrator | Monday 23 March 2026 01:03:53 +0000 (0:00:00.295) 0:03:28.484 **********
2026-03-23 01:09:12.094587 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:09:12.094592 | orchestrator |
2026-03-23 01:09:12.094597 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-23 01:09:12.094601 | orchestrator | Monday 23 March 2026 01:03:54 +0000 (0:00:00.933) 0:03:29.418 **********
2026-03-23 01:09:12.094610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094664 | orchestrator |
2026-03-23 01:09:12.094670 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-23 01:09:12.094675 | orchestrator | Monday 23 March 2026 01:03:57 +0000 (0:00:02.794) 0:03:32.212 **********
2026-03-23 01:09:12.094680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094699 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.094705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094735 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.094741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094751 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.094756 | orchestrator |
2026-03-23 01:09:12.094761 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-23 01:09:12.094767 | orchestrator | Monday 23 March 2026 01:03:58 +0000 (0:00:01.021) 0:03:33.233 **********
2026-03-23 01:09:12.094772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094787 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.094797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094813 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.094819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094830 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:09:12.094835 | orchestrator |
2026-03-23 01:09:12.094840 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-03-23 01:09:12.094846 | orchestrator | Monday 23 March 2026 01:03:59 +0000 (0:00:01.163) 0:03:34.397 **********
2026-03-23 01:09:12.094857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.094948 | orchestrator |
2026-03-23 01:09:12.094953 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-03-23 01:09:12.094958 | orchestrator | Monday 23 March 2026 01:04:03 +0000 (0:00:03.325) 0:03:37.723 **********
2026-03-23 01:09:12.094964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.094990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.095013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.095019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.095025 | orchestrator |
2026-03-23 01:09:12.095030 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-03-23 01:09:12.095035 | orchestrator | Monday 23 March 2026 01:04:12 +0000 (0:00:09.371) 0:03:47.095 **********
2026-03-23 01:09:12.095043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.095053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.095062 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:09:12.095067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-23 01:09:12.095073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-23 01:09:12.095078 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:09:12.095084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'},
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-23 01:09:12.095093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.095098 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.095103 | orchestrator | 2026-03-23 01:09:12.095108 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-23 01:09:12.095117 | orchestrator | Monday 23 March 2026 01:04:13 +0000 (0:00:00.754) 0:03:47.850 ********** 2026-03-23 01:09:12.095123 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:09:12.095128 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:09:12.095134 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:09:12.095139 | orchestrator | 2026-03-23 01:09:12.095148 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-23 
01:09:12.095153 | orchestrator | Monday 23 March 2026 01:04:16 +0000 (0:00:02.849) 0:03:50.699 ********** 2026-03-23 01:09:12.095158 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.095163 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.095168 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.095172 | orchestrator | 2026-03-23 01:09:12.095177 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-23 01:09:12.095182 | orchestrator | Monday 23 March 2026 01:04:16 +0000 (0:00:00.766) 0:03:51.466 ********** 2026-03-23 01:09:12.095187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-23 01:09:12.095193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-23 01:09:12.095207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-23 01:09:12.095217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095234 | orchestrator | 2026-03-23 01:09:12.095239 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-23 01:09:12.095245 | orchestrator | Monday 23 March 2026 01:04:19 +0000 (0:00:02.825) 0:03:54.291 ********** 2026-03-23 01:09:12.095250 | orchestrator | 2026-03-23 01:09:12.095255 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-23 01:09:12.095260 | orchestrator | Monday 23 March 2026 01:04:19 +0000 (0:00:00.136) 0:03:54.428 ********** 2026-03-23 01:09:12.095265 | orchestrator | 2026-03-23 01:09:12.095270 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-23 01:09:12.095276 | orchestrator | Monday 23 March 2026 01:04:19 +0000 (0:00:00.136) 0:03:54.565 ********** 2026-03-23 01:09:12.095281 | orchestrator | 2026-03-23 01:09:12.095286 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-23 01:09:12.095291 | orchestrator | Monday 23 March 2026 01:04:20 +0000 (0:00:00.237) 0:03:54.802 ********** 2026-03-23 01:09:12.095296 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:09:12.095301 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:09:12.095307 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:09:12.095312 | orchestrator | 2026-03-23 01:09:12.095317 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-23 01:09:12.095322 | orchestrator | Monday 23 March 2026 01:04:35 +0000 (0:00:15.238) 0:04:10.040 ********** 2026-03-23 01:09:12.095327 | orchestrator | changed: [testbed-node-0] 
2026-03-23 01:09:12.095333 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:09:12.095342 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:09:12.095347 | orchestrator | 2026-03-23 01:09:12.095352 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-23 01:09:12.095358 | orchestrator | 2026-03-23 01:09:12.095363 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-23 01:09:12.095368 | orchestrator | Monday 23 March 2026 01:04:45 +0000 (0:00:10.135) 0:04:20.176 ********** 2026-03-23 01:09:12.095375 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:09:12.095380 | orchestrator | 2026-03-23 01:09:12.095386 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-23 01:09:12.095391 | orchestrator | Monday 23 March 2026 01:04:46 +0000 (0:00:01.195) 0:04:21.371 ********** 2026-03-23 01:09:12.095396 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.095401 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.095406 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.095415 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.095420 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.095426 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.095431 | orchestrator | 2026-03-23 01:09:12.095436 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-23 01:09:12.095442 | orchestrator | Monday 23 March 2026 01:04:47 +0000 (0:00:01.223) 0:04:22.595 ********** 2026-03-23 01:09:12.095447 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.095452 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.095457 | orchestrator | skipping: 
[testbed-node-2] 2026-03-23 01:09:12.095463 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 01:09:12.095468 | orchestrator | 2026-03-23 01:09:12.095473 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-23 01:09:12.095482 | orchestrator | Monday 23 March 2026 01:04:49 +0000 (0:00:01.286) 0:04:23.882 ********** 2026-03-23 01:09:12.095487 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-23 01:09:12.095493 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-23 01:09:12.095498 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-23 01:09:12.095503 | orchestrator | 2026-03-23 01:09:12.095509 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-23 01:09:12.095514 | orchestrator | Monday 23 March 2026 01:04:50 +0000 (0:00:01.320) 0:04:25.202 ********** 2026-03-23 01:09:12.095519 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-23 01:09:12.095525 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-23 01:09:12.095530 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-23 01:09:12.095535 | orchestrator | 2026-03-23 01:09:12.095540 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-23 01:09:12.095545 | orchestrator | Monday 23 March 2026 01:04:51 +0000 (0:00:01.068) 0:04:26.271 ********** 2026-03-23 01:09:12.095550 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-23 01:09:12.095555 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.095560 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-23 01:09:12.095565 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.095570 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-23 01:09:12.095575 | 
orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.095580 | orchestrator | 2026-03-23 01:09:12.095586 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-23 01:09:12.095591 | orchestrator | Monday 23 March 2026 01:04:52 +0000 (0:00:00.479) 0:04:26.750 ********** 2026-03-23 01:09:12.095596 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-23 01:09:12.095601 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-23 01:09:12.095609 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.095614 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-23 01:09:12.095620 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-23 01:09:12.095625 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.095630 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-23 01:09:12.095636 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-23 01:09:12.095640 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-23 01:09:12.095645 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-23 01:09:12.095650 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.095655 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-23 01:09:12.095660 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-23 01:09:12.095665 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-23 01:09:12.095670 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-23 01:09:12.095675 | orchestrator | 2026-03-23 
01:09:12.095680 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-23 01:09:12.095685 | orchestrator | Monday 23 March 2026 01:04:54 +0000 (0:00:02.165) 0:04:28.916 ********** 2026-03-23 01:09:12.095691 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.095695 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.095701 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.095705 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.095711 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.095716 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.095721 | orchestrator | 2026-03-23 01:09:12.095726 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-23 01:09:12.095731 | orchestrator | Monday 23 March 2026 01:04:55 +0000 (0:00:01.013) 0:04:29.929 ********** 2026-03-23 01:09:12.095736 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.095741 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.095746 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.095752 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.095757 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.095762 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.095767 | orchestrator | 2026-03-23 01:09:12.095771 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-23 01:09:12.095776 | orchestrator | Monday 23 March 2026 01:04:56 +0000 (0:00:01.657) 0:04:31.587 ********** 2026-03-23 01:09:12.095785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095856 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095862 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095917 | orchestrator | 2026-03-23 01:09:12.095922 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-23 01:09:12.095927 | orchestrator | Monday 23 March 2026 01:04:59 +0000 (0:00:02.084) 0:04:33.672 ********** 2026-03-23 01:09:12.095932 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:09:12.095938 | orchestrator | 2026-03-23 01:09:12.095943 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-23 01:09:12.095948 | orchestrator | Monday 23 March 2026 01:05:00 +0000 (0:00:01.233) 0:04:34.905 ********** 2026-03-23 01:09:12.095953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095981 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.095997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.096003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.096011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.096023 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.096029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.096034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.096039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.096045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.096054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.096063 | 
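Every container definition echoed in the loop results above carries the same healthcheck shape: string-encoded seconds for `interval`, `timeout`, and `start_period`, a string `retries`, and a `CMD-SHELL` test. A minimal sketch of how such a mapping translates into the nanosecond fields the Docker Engine API expects (the conversion below is illustrative, not kolla-ansible's actual code):

```python
# Illustrative: normalize a kolla-ansible style healthcheck mapping
# (string-encoded seconds, as echoed in the task output above) into the
# nanosecond-based fields the Docker Engine API's HealthConfig uses.

NANOS = 1_000_000_000  # Docker expresses durations in nanoseconds

def to_docker_healthcheck(hc: dict) -> dict:
    """Convert {'interval': '30', 'retries': '3', ...} to Docker API form."""
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port nova-compute 5672']
        "interval": int(hc["interval"]) * NANOS,
        "timeout": int(hc["timeout"]) * NANOS,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * NANOS,
    }

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port nova-compute 5672"],
      "timeout": "30"}
print(to_docker_healthcheck(hc))
```

With the values from the log, a 30-second interval becomes 30,000,000,000 ns on the API side.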
orchestrator | 2026-03-23 01:09:12.096068 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-23 01:09:12.096073 | orchestrator | Monday 23 March 2026 01:05:04 +0000 (0:00:04.223) 0:04:39.129 ********** 2026-03-23 01:09:12.096082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.096088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.096093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096099 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.096104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.096110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.096123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096129 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.096135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.096140 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.096145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096151 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.096156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-23 01:09:12.096168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096174 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.096258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-23 01:09:12.096265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096270 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.096275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-23 01:09:12.096281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096286 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.096291 | orchestrator | 2026-03-23 01:09:12.096297 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-23 01:09:12.096302 | orchestrator | Monday 23 March 2026 01:05:05 +0000 (0:00:01.354) 0:04:40.484 ********** 2026-03-23 01:09:12.096308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.096321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.096330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096336 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.096341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.096346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.096352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096360 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.096365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.096373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.096382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096387 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.096392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-23 01:09:12.096398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096403 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.096408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-23 01:09:12.096417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096423 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.096430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-23 01:09:12.096438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.096443 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.096448 | orchestrator | 2026-03-23 01:09:12.096454 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-23 01:09:12.096459 | orchestrator | Monday 23 March 2026 01:05:07 +0000 (0:00:01.759) 0:04:42.243 ********** 2026-03-23 01:09:12.096464 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.096469 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.096475 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.096480 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 01:09:12.096485 | orchestrator | 2026-03-23 01:09:12.096490 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-23 01:09:12.096495 | orchestrator | Monday 23 March 2026 01:05:08 +0000 (0:00:00.901) 0:04:43.145 ********** 2026-03-23 01:09:12.096500 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-23 01:09:12.096505 | 
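The external_ceph.yml flow included above starts by checking for the Ceph keyring files that the subsequent "Extract nova key from file" and "Extract cinder key from file" tasks read. A Ceph keyring is an INI-style file, so pulling a client key out can be sketched with `configparser` (the file content and key value below are made up for demonstration):

```python
# Illustrative: extract the 'key' entry for a Ceph client from keyring
# text such as ceph.client.nova.keyring. Content here is fabricated.
import configparser

KEYRING = """\
[client.nova]
key = AQBexampleBASE64keyvalue==
caps mon = profile rbd
"""

def extract_key(keyring_text: str, client: str) -> str:
    cp = configparser.ConfigParser()
    cp.read_string(keyring_text)
    return cp[client]["key"].strip()

print(extract_key(KEYRING, "client.nova"))
```

Real keyrings often indent the entries with tabs; stripping that indentation first keeps `configparser` happy.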
orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-23 01:09:12.096510 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-23 01:09:12.096515 | orchestrator | 2026-03-23 01:09:12.096520 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-23 01:09:12.096525 | orchestrator | Monday 23 March 2026 01:05:09 +0000 (0:00:01.344) 0:04:44.490 ********** 2026-03-23 01:09:12.096530 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-23 01:09:12.096536 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-23 01:09:12.096541 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-23 01:09:12.096545 | orchestrator | 2026-03-23 01:09:12.096551 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-23 01:09:12.096560 | orchestrator | Monday 23 March 2026 01:05:11 +0000 (0:00:01.334) 0:04:45.824 ********** 2026-03-23 01:09:12.096564 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:09:12.096570 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:09:12.096575 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:09:12.096581 | orchestrator | 2026-03-23 01:09:12.096586 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-23 01:09:12.096591 | orchestrator | Monday 23 March 2026 01:05:11 +0000 (0:00:00.495) 0:04:46.319 ********** 2026-03-23 01:09:12.096596 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:09:12.096601 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:09:12.096607 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:09:12.096612 | orchestrator | 2026-03-23 01:09:12.096617 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-23 01:09:12.096622 | orchestrator | Monday 23 March 2026 01:05:12 +0000 (0:00:00.494) 0:04:46.814 ********** 2026-03-23 01:09:12.096627 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-23 
01:09:12.096633 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-23 01:09:12.096638 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-23 01:09:12.096643 | orchestrator | 2026-03-23 01:09:12.096648 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-23 01:09:12.096653 | orchestrator | Monday 23 March 2026 01:05:13 +0000 (0:00:01.015) 0:04:47.829 ********** 2026-03-23 01:09:12.096658 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-23 01:09:12.096663 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-23 01:09:12.096668 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-23 01:09:12.096673 | orchestrator | 2026-03-23 01:09:12.096678 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-23 01:09:12.096683 | orchestrator | Monday 23 March 2026 01:05:14 +0000 (0:00:01.364) 0:04:49.194 ********** 2026-03-23 01:09:12.096701 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-23 01:09:12.096712 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-23 01:09:12.096717 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-23 01:09:12.096722 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-23 01:09:12.096728 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-23 01:09:12.096733 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-23 01:09:12.096739 | orchestrator | 2026-03-23 01:09:12.096744 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-23 01:09:12.096749 | orchestrator | Monday 23 March 2026 01:05:18 +0000 (0:00:04.197) 0:04:53.391 ********** 2026-03-23 01:09:12.096754 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.096759 | orchestrator | skipping: 
[testbed-node-4] 2026-03-23 01:09:12.096764 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.096769 | orchestrator | 2026-03-23 01:09:12.096775 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-23 01:09:12.096781 | orchestrator | Monday 23 March 2026 01:05:18 +0000 (0:00:00.262) 0:04:53.653 ********** 2026-03-23 01:09:12.096786 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.096794 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.096800 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.096805 | orchestrator | 2026-03-23 01:09:12.096810 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-23 01:09:12.096815 | orchestrator | Monday 23 March 2026 01:05:19 +0000 (0:00:00.253) 0:04:53.907 ********** 2026-03-23 01:09:12.096821 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.096827 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.096830 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.096835 | orchestrator | 2026-03-23 01:09:12.096840 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-23 01:09:12.096845 | orchestrator | Monday 23 March 2026 01:05:20 +0000 (0:00:01.382) 0:04:55.289 ********** 2026-03-23 01:09:12.096861 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-23 01:09:12.096881 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-23 01:09:12.096886 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-23 01:09:12.096891 | orchestrator | changed: [testbed-node-3] => (item={'uuid': 
'63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-23 01:09:12.096896 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-23 01:09:12.096901 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-23 01:09:12.096906 | orchestrator | 2026-03-23 01:09:12.096911 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-23 01:09:12.096915 | orchestrator | Monday 23 March 2026 01:05:23 +0000 (0:00:02.774) 0:04:58.064 ********** 2026-03-23 01:09:12.096920 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-23 01:09:12.096925 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-23 01:09:12.096930 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-23 01:09:12.096982 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-23 01:09:12.096992 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.096998 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-23 01:09:12.097004 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.097009 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-23 01:09:12.097015 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.097020 | orchestrator | 2026-03-23 01:09:12.097025 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-23 01:09:12.097031 | orchestrator | Monday 23 March 2026 01:05:26 +0000 (0:00:03.312) 0:05:01.376 ********** 2026-03-23 01:09:12.097037 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.097042 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.097048 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.097053 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-23 01:09:12.097059 | orchestrator | 2026-03-23 01:09:12.097065 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-23 01:09:12.097071 | orchestrator | Monday 23 March 2026 01:05:29 +0000 (0:00:02.806) 0:05:04.183 ********** 2026-03-23 01:09:12.097077 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-23 01:09:12.097083 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-23 01:09:12.097089 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-23 01:09:12.097095 | orchestrator | 2026-03-23 01:09:12.097100 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-23 01:09:12.097106 | orchestrator | Monday 23 March 2026 01:05:31 +0000 (0:00:02.087) 0:05:06.270 ********** 2026-03-23 01:09:12.097111 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.097117 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.097123 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.097128 | orchestrator | 2026-03-23 01:09:12.097134 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-23 01:09:12.097140 | orchestrator | Monday 23 March 2026 01:05:32 +0000 (0:00:00.413) 0:05:06.684 ********** 2026-03-23 01:09:12.097146 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.097152 | orchestrator | 2026-03-23 01:09:12.097158 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-23 01:09:12.097164 | orchestrator | Monday 23 March 2026 01:05:32 +0000 (0:00:00.190) 0:05:06.874 ********** 2026-03-23 01:09:12.097178 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.097184 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.097189 | orchestrator | skipping: [testbed-node-5] 2026-03-23 
01:09:12.097195 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.097200 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.097205 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.097210 | orchestrator | 2026-03-23 01:09:12.097215 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-23 01:09:12.097220 | orchestrator | Monday 23 March 2026 01:05:33 +0000 (0:00:00.894) 0:05:07.769 ********** 2026-03-23 01:09:12.097225 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-23 01:09:12.097230 | orchestrator | 2026-03-23 01:09:12.097236 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-23 01:09:12.097241 | orchestrator | Monday 23 March 2026 01:05:33 +0000 (0:00:00.731) 0:05:08.501 ********** 2026-03-23 01:09:12.097246 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.097252 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.097258 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.097263 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.097268 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.097278 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.097284 | orchestrator | 2026-03-23 01:09:12.097290 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-23 01:09:12.097296 | orchestrator | Monday 23 March 2026 01:05:34 +0000 (0:00:00.432) 0:05:08.933 ********** 2026-03-23 01:09:12.097309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097425 | orchestrator | 2026-03-23 01:09:12.097431 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-23 01:09:12.097437 | orchestrator | Monday 23 March 2026 01:05:37 +0000 (0:00:03.407) 0:05:12.341 ********** 2026-03-23 01:09:12.097444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.097450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.097458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.097468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.097474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.097483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.097490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.097552 | orchestrator | 2026-03-23 01:09:12.097558 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-23 01:09:12.097563 | orchestrator | Monday 23 March 2026 01:05:43 +0000 (0:00:05.436) 0:05:17.777 ********** 2026-03-23 01:09:12.097569 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.097574 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.097580 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.097585 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.097593 | orchestrator | 
skipping: [testbed-node-1] 2026-03-23 01:09:12.097599 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.097604 | orchestrator | 2026-03-23 01:09:12.097609 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-23 01:09:12.097615 | orchestrator | Monday 23 March 2026 01:05:44 +0000 (0:00:01.353) 0:05:19.131 ********** 2026-03-23 01:09:12.097620 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-23 01:09:12.097626 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-23 01:09:12.097631 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-23 01:09:12.097636 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-23 01:09:12.097647 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-23 01:09:12.097653 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-23 01:09:12.097659 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.097664 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-23 01:09:12.097670 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-23 01:09:12.097675 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.097680 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-23 01:09:12.097685 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.097690 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-23 01:09:12.097695 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 
'dest': 'libvirtd.conf'}) 2026-03-23 01:09:12.097701 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-23 01:09:12.097706 | orchestrator | 2026-03-23 01:09:12.097712 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-23 01:09:12.097717 | orchestrator | Monday 23 March 2026 01:05:47 +0000 (0:00:03.471) 0:05:22.603 ********** 2026-03-23 01:09:12.097722 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.097728 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.097733 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.097739 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.097744 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.097749 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.097754 | orchestrator | 2026-03-23 01:09:12.097759 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-23 01:09:12.097764 | orchestrator | Monday 23 March 2026 01:05:48 +0000 (0:00:00.624) 0:05:23.227 ********** 2026-03-23 01:09:12.097770 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-23 01:09:12.097776 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-23 01:09:12.097781 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-23 01:09:12.097787 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-23 01:09:12.097792 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-23 01:09:12.097798 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-23 01:09:12.097803 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-23 01:09:12.097809 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-23 01:09:12.097814 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-23 01:09:12.097820 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-23 01:09:12.097825 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.097830 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-23 01:09:12.097835 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.097843 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-23 01:09:12.097848 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.097857 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-23 01:09:12.097863 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-23 01:09:12.097898 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-23 01:09:12.097904 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-23 01:09:12.097912 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-23 
01:09:12.097918 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-23 01:09:12.097923 | orchestrator | 2026-03-23 01:09:12.097928 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-23 01:09:12.097934 | orchestrator | Monday 23 March 2026 01:05:53 +0000 (0:00:04.873) 0:05:28.100 ********** 2026-03-23 01:09:12.097939 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-23 01:09:12.097945 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-23 01:09:12.097950 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-23 01:09:12.097956 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-23 01:09:12.097961 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-23 01:09:12.097967 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-23 01:09:12.097972 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-23 01:09:12.097978 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-23 01:09:12.097984 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-23 01:09:12.097989 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-23 01:09:12.097995 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-23 01:09:12.098000 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.098006 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2026-03-23 01:09:12.098011 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-23 01:09:12.098045 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-23 01:09:12.098050 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-23 01:09:12.098056 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.098061 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-23 01:09:12.098067 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-23 01:09:12.098073 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-23 01:09:12.098078 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.098084 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-23 01:09:12.098090 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-23 01:09:12.098096 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-23 01:09:12.098102 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-23 01:09:12.098108 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-23 01:09:12.098120 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-23 01:09:12.098126 | orchestrator | 2026-03-23 01:09:12.098132 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-23 01:09:12.098138 | orchestrator | Monday 23 March 2026 01:06:01 +0000 (0:00:07.933) 0:05:36.034 ********** 2026-03-23 01:09:12.098144 | orchestrator | skipping: [testbed-node-3] 
2026-03-23 01:09:12.098150 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.098155 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.098161 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.098167 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.098173 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.098179 | orchestrator | 2026-03-23 01:09:12.098185 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-23 01:09:12.098191 | orchestrator | Monday 23 March 2026 01:06:01 +0000 (0:00:00.482) 0:05:36.517 ********** 2026-03-23 01:09:12.098197 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.098202 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.098208 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.098214 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.098220 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.098225 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.098231 | orchestrator | 2026-03-23 01:09:12.098239 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-23 01:09:12.098245 | orchestrator | Monday 23 March 2026 01:06:02 +0000 (0:00:00.633) 0:05:37.151 ********** 2026-03-23 01:09:12.098250 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.098256 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.098261 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.098267 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.098272 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.098278 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.098283 | orchestrator | 2026-03-23 01:09:12.098289 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-03-23 01:09:12.098295 | orchestrator | Monday 23 
March 2026 01:06:04 +0000 (0:00:02.074) 0:05:39.225 ********** 2026-03-23 01:09:12.098300 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.098310 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.098316 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.098322 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.098329 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.098335 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.098341 | orchestrator | 2026-03-23 01:09:12.098346 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-23 01:09:12.098352 | orchestrator | Monday 23 March 2026 01:06:06 +0000 (0:00:02.410) 0:05:41.635 ********** 2026-03-23 01:09:12.098359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.098366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.098377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.098383 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.098389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.098398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.098408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.098415 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.098421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-23 01:09:12.098432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-23 01:09:12.098438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.098445 | orchestrator | skipping: [testbed-node-5] 2026-03-23 
01:09:12.098454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-23 01:09:12.098463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-23 01:09:12.098469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.098475 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.098481 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.098490 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.098496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-23 01:09:12.098501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-23 01:09:12.098507 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.098513 | orchestrator | 
2026-03-23 01:09:12.098518 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-23 01:09:12.098524 | orchestrator | Monday 23 March 2026 01:06:08 +0000 (0:00:01.214) 0:05:42.850 ********** 2026-03-23 01:09:12.098530 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-23 01:09:12.098535 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-23 01:09:12.098541 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.098547 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-23 01:09:12.098553 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-23 01:09:12.098559 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.098565 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-23 01:09:12.098571 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-23 01:09:12.098577 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.098583 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-23 01:09:12.098588 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-23 01:09:12.098593 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.098599 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-23 01:09:12.098607 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-23 01:09:12.098612 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.098617 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-23 01:09:12.098622 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-23 01:09:12.098628 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.098633 | orchestrator | 2026-03-23 01:09:12.098639 | orchestrator | TASK [nova-cell : Check nova-cell containers] 
********************************** 2026-03-23 01:09:12.098645 | orchestrator | Monday 23 March 2026 01:06:08 +0000 (0:00:00.705) 0:05:43.556 ********** 2026-03-23 01:09:12.098685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098705 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 
2026-03-23 01:09:12.098821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-23 01:09:12.098886 | orchestrator | 2026-03-23 01:09:12.098892 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-23 01:09:12.098897 | orchestrator | Monday 23 March 2026 01:06:11 +0000 (0:00:02.889) 0:05:46.445 ********** 2026-03-23 01:09:12.098902 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.098908 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.098913 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.098919 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.098924 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.098930 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.098935 | orchestrator | 2026-03-23 01:09:12.098940 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-23 01:09:12.098946 | orchestrator | Monday 23 March 2026 01:06:12 +0000 (0:00:00.638) 0:05:47.084 ********** 2026-03-23 01:09:12.098951 | orchestrator | 2026-03-23 01:09:12.098956 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-03-23 01:09:12.098962 | orchestrator | Monday 23 March 2026 01:06:12 +0000 (0:00:00.121) 0:05:47.205 ********** 2026-03-23 01:09:12.098967 | orchestrator | 2026-03-23 01:09:12.098972 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-23 01:09:12.098978 | orchestrator | Monday 23 March 2026 01:06:12 +0000 (0:00:00.120) 0:05:47.326 ********** 2026-03-23 01:09:12.098983 | orchestrator | 2026-03-23 01:09:12.098988 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-23 01:09:12.098994 | orchestrator | Monday 23 March 2026 01:06:12 +0000 (0:00:00.122) 0:05:47.449 ********** 2026-03-23 01:09:12.098999 | orchestrator | 2026-03-23 01:09:12.099005 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-23 01:09:12.099010 | orchestrator | Monday 23 March 2026 01:06:12 +0000 (0:00:00.123) 0:05:47.572 ********** 2026-03-23 01:09:12.099015 | orchestrator | 2026-03-23 01:09:12.099024 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-23 01:09:12.099030 | orchestrator | Monday 23 March 2026 01:06:13 +0000 (0:00:00.216) 0:05:47.789 ********** 2026-03-23 01:09:12.099035 | orchestrator | 2026-03-23 01:09:12.099040 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-23 01:09:12.099046 | orchestrator | Monday 23 March 2026 01:06:13 +0000 (0:00:00.118) 0:05:47.908 ********** 2026-03-23 01:09:12.099051 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:09:12.099057 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:09:12.099062 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:09:12.099067 | orchestrator | 2026-03-23 01:09:12.099075 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-23 01:09:12.099080 
| orchestrator | Monday 23 March 2026 01:06:24 +0000 (0:00:11.301) 0:05:59.209 ********** 2026-03-23 01:09:12.099085 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:09:12.099090 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:09:12.099095 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:09:12.099100 | orchestrator | 2026-03-23 01:09:12.099105 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-23 01:09:12.099111 | orchestrator | Monday 23 March 2026 01:06:41 +0000 (0:00:17.208) 0:06:16.418 ********** 2026-03-23 01:09:12.099116 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.099121 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.099126 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.099131 | orchestrator | 2026-03-23 01:09:12.099139 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-23 01:09:12.099144 | orchestrator | Monday 23 March 2026 01:07:01 +0000 (0:00:19.589) 0:06:36.008 ********** 2026-03-23 01:09:12.099150 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.099155 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.099160 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.099165 | orchestrator | 2026-03-23 01:09:12.099171 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-23 01:09:12.099176 | orchestrator | Monday 23 March 2026 01:07:31 +0000 (0:00:30.442) 0:07:06.450 ********** 2026-03-23 01:09:12.099182 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-23 01:09:12.099187 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-03-23 01:09:12.099193 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2026-03-23 01:09:12.099198 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.099204 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.099208 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.099213 | orchestrator | 2026-03-23 01:09:12.099218 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-23 01:09:12.099224 | orchestrator | Monday 23 March 2026 01:07:37 +0000 (0:00:06.143) 0:07:12.594 ********** 2026-03-23 01:09:12.099229 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.099234 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.099240 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.099245 | orchestrator | 2026-03-23 01:09:12.099251 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-23 01:09:12.099257 | orchestrator | Monday 23 March 2026 01:07:38 +0000 (0:00:00.672) 0:07:13.266 ********** 2026-03-23 01:09:12.099262 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:09:12.099268 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:09:12.099273 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:09:12.099279 | orchestrator | 2026-03-23 01:09:12.099285 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-23 01:09:12.099291 | orchestrator | Monday 23 March 2026 01:08:01 +0000 (0:00:22.520) 0:07:35.786 ********** 2026-03-23 01:09:12.099296 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.099309 | orchestrator | 2026-03-23 01:09:12.099314 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-23 01:09:12.099320 | orchestrator | Monday 23 March 2026 01:08:01 +0000 (0:00:00.221) 0:07:36.008 ********** 2026-03-23 01:09:12.099326 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.099332 | orchestrator | skipping: [testbed-node-1] 
2026-03-23 01:09:12.099338 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.099344 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.099349 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.099355 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-23 01:09:12.099361 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-23 01:09:12.099366 | orchestrator | 2026-03-23 01:09:12.099372 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-23 01:09:12.099377 | orchestrator | Monday 23 March 2026 01:08:22 +0000 (0:00:20.941) 0:07:56.949 ********** 2026-03-23 01:09:12.099383 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.099388 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.099394 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.099399 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.099404 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.099409 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.099415 | orchestrator | 2026-03-23 01:09:12.099421 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-23 01:09:12.099426 | orchestrator | Monday 23 March 2026 01:08:31 +0000 (0:00:08.960) 0:08:05.910 ********** 2026-03-23 01:09:12.099432 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.099438 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.099443 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.099449 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.099455 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.099461 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-23 01:09:12.099466 | 
orchestrator | 2026-03-23 01:09:12.099472 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-23 01:09:12.099478 | orchestrator | Monday 23 March 2026 01:08:34 +0000 (0:00:03.682) 0:08:09.592 ********** 2026-03-23 01:09:12.099483 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-23 01:09:12.099488 | orchestrator | 2026-03-23 01:09:12.099493 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-23 01:09:12.099499 | orchestrator | Monday 23 March 2026 01:08:48 +0000 (0:00:13.779) 0:08:23.372 ********** 2026-03-23 01:09:12.099505 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-23 01:09:12.099511 | orchestrator | 2026-03-23 01:09:12.099517 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-23 01:09:12.099525 | orchestrator | Monday 23 March 2026 01:08:49 +0000 (0:00:01.233) 0:08:24.606 ********** 2026-03-23 01:09:12.099531 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.099536 | orchestrator | 2026-03-23 01:09:12.099542 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-23 01:09:12.099548 | orchestrator | Monday 23 March 2026 01:08:51 +0000 (0:00:01.356) 0:08:25.963 ********** 2026-03-23 01:09:12.099553 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-23 01:09:12.099559 | orchestrator | 2026-03-23 01:09:12.099564 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-23 01:09:12.099569 | orchestrator | Monday 23 March 2026 01:09:02 +0000 (0:00:11.232) 0:08:37.195 ********** 2026-03-23 01:09:12.099575 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:09:12.099581 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:09:12.099586 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:09:12.099595 | 
orchestrator | ok: [testbed-node-0] 2026-03-23 01:09:12.099600 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:09:12.099610 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:09:12.099615 | orchestrator | 2026-03-23 01:09:12.099621 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-23 01:09:12.099626 | orchestrator | 2026-03-23 01:09:12.099632 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-23 01:09:12.099637 | orchestrator | Monday 23 March 2026 01:09:04 +0000 (0:00:01.656) 0:08:38.851 ********** 2026-03-23 01:09:12.099643 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:09:12.099649 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:09:12.099654 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:09:12.099660 | orchestrator | 2026-03-23 01:09:12.099665 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-23 01:09:12.099670 | orchestrator | 2026-03-23 01:09:12.099676 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-23 01:09:12.099681 | orchestrator | Monday 23 March 2026 01:09:05 +0000 (0:00:01.073) 0:08:39.925 ********** 2026-03-23 01:09:12.099687 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.099693 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.099699 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.099704 | orchestrator | 2026-03-23 01:09:12.099710 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-23 01:09:12.099715 | orchestrator | 2026-03-23 01:09:12.099721 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-23 01:09:12.099726 | orchestrator | Monday 23 March 2026 01:09:05 +0000 (0:00:00.479) 0:08:40.404 ********** 2026-03-23 01:09:12.099732 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-23 01:09:12.099737 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-23 01:09:12.099743 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-23 01:09:12.099748 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-23 01:09:12.099753 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-23 01:09:12.099758 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-23 01:09:12.099763 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:09:12.099768 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-23 01:09:12.099773 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-23 01:09:12.099778 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-23 01:09:12.099784 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-23 01:09:12.099789 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-23 01:09:12.099795 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-23 01:09:12.099800 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:09:12.099805 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-23 01:09:12.099811 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-23 01:09:12.099817 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-23 01:09:12.099822 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-23 01:09:12.099828 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-23 01:09:12.099833 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-23 01:09:12.099839 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:09:12.099844 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-23 01:09:12.099849 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-23 01:09:12.099854 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-23 01:09:12.099860 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-23 01:09:12.099865 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-23 01:09:12.099902 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-23 01:09:12.099911 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.099916 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-23 01:09:12.099921 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-23 01:09:12.099926 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-23 01:09:12.099931 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-23 01:09:12.099937 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-23 01:09:12.099942 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-23 01:09:12.099947 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.099953 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-23 01:09:12.099957 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-23 01:09:12.099963 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-23 01:09:12.099968 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-23 01:09:12.099976 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-23 01:09:12.099982 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-23 01:09:12.099987 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.099993 | orchestrator | 
2026-03-23 01:09:12.099998 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-23 01:09:12.100004 | orchestrator | 2026-03-23 01:09:12.100009 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-23 01:09:12.100015 | orchestrator | Monday 23 March 2026 01:09:06 +0000 (0:00:01.204) 0:08:41.609 ********** 2026-03-23 01:09:12.100020 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-23 01:09:12.100025 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-23 01:09:12.100030 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.100039 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-23 01:09:12.100044 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-23 01:09:12.100050 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.100054 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-23 01:09:12.100059 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-23 01:09:12.100064 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.100068 | orchestrator | 2026-03-23 01:09:12.100073 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-23 01:09:12.100078 | orchestrator | 2026-03-23 01:09:12.100083 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-23 01:09:12.100089 | orchestrator | Monday 23 March 2026 01:09:07 +0000 (0:00:00.666) 0:08:42.275 ********** 2026-03-23 01:09:12.100094 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.100099 | orchestrator | 2026-03-23 01:09:12.100104 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-23 01:09:12.100110 | orchestrator | 2026-03-23 01:09:12.100116 | orchestrator | TASK [nova-cell : Run Nova 
cell online database migrations] ******************** 2026-03-23 01:09:12.100122 | orchestrator | Monday 23 March 2026 01:09:08 +0000 (0:00:00.682) 0:08:42.957 ********** 2026-03-23 01:09:12.100127 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:09:12.100133 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:09:12.100138 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:09:12.100144 | orchestrator | 2026-03-23 01:09:12.100149 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:09:12.100155 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:09:12.100161 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-03-23 01:09:12.100166 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-23 01:09:12.100175 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-23 01:09:12.100180 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-23 01:09:12.100186 | orchestrator | testbed-node-4 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-23 01:09:12.100191 | orchestrator | testbed-node-5 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-23 01:09:12.100196 | orchestrator | 2026-03-23 01:09:12.100201 | orchestrator | 2026-03-23 01:09:12.100207 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:09:12.100212 | orchestrator | Monday 23 March 2026 01:09:08 +0000 (0:00:00.394) 0:08:43.352 ********** 2026-03-23 01:09:12.100217 | orchestrator | =============================================================================== 2026-03-23 01:09:12.100222 | orchestrator | nova : 
Running Nova API bootstrap container ---------------------------- 33.57s 2026-03-23 01:09:12.100227 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.44s 2026-03-23 01:09:12.100233 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.52s 2026-03-23 01:09:12.100238 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.94s 2026-03-23 01:09:12.100243 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.90s 2026-03-23 01:09:12.100249 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.59s 2026-03-23 01:09:12.100254 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.38s 2026-03-23 01:09:12.100259 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.21s 2026-03-23 01:09:12.100264 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 15.24s 2026-03-23 01:09:12.100269 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.88s 2026-03-23 01:09:12.100274 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.21s 2026-03-23 01:09:12.100280 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.78s 2026-03-23 01:09:12.100285 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.78s 2026-03-23 01:09:12.100290 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.17s 2026-03-23 01:09:12.100298 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.30s 2026-03-23 01:09:12.100303 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.23s 2026-03-23 01:09:12.100308 | orchestrator | nova : Restart 
nova-api container -------------------------------------- 10.14s 2026-03-23 01:09:12.100313 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.37s 2026-03-23 01:09:12.100318 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.96s 2026-03-23 01:09:12.100323 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.93s 2026-03-23 01:09:12.100328 | orchestrator | 2026-03-23 01:09:12 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:09:12.100336 | orchestrator | 2026-03-23 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:09:15.137037 | orchestrator | 2026-03-23 01:09:15 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:09:15.137095 | orchestrator | 2026-03-23 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:09:18.176629 | orchestrator | 2026-03-23 01:09:18 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:09:18.176689 | orchestrator | 2026-03-23 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:09:21.219765 | orchestrator | 2026-03-23 01:09:21 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:09:21.219822 | orchestrator | 2026-03-23 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:09:24.261758 | orchestrator | 2026-03-23 01:09:24 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:09:24.261816 | orchestrator | 2026-03-23 01:09:24 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:09:27.296574 | orchestrator | 2026-03-23 01:09:27 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:09:27.296697 | orchestrator | 2026-03-23 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:09:30.343657 | orchestrator | 2026-03-23 01:09:30 | INFO  | 
Task
3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:11:07.718705 | orchestrator | 2026-03-23 01:11:07 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:11:10.764250 | orchestrator | 2026-03-23 01:11:10 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:11:10.764310 | orchestrator | 2026-03-23 01:11:10 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:11:13.811919 | orchestrator | 2026-03-23 01:11:13 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:11:13.812946 | orchestrator | 2026-03-23 01:11:13 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:11:16.854916 | orchestrator | 2026-03-23 01:11:16 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:11:16.854975 | orchestrator | 2026-03-23 01:11:16 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:11:19.898105 | orchestrator | 2026-03-23 01:11:19 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:11:19.898173 | orchestrator | 2026-03-23 01:11:19 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:11:22.936932 | orchestrator | 2026-03-23 01:11:22 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:11:22.936984 | orchestrator | 2026-03-23 01:11:22 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:11:25.980550 | orchestrator | 2026-03-23 01:11:25 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state STARTED 2026-03-23 01:11:25.980615 | orchestrator | 2026-03-23 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-03-23 01:11:29.036045 | orchestrator | 2026-03-23 01:11:29.036098 | orchestrator | 2026-03-23 01:11:29.036106 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-23 01:11:29.036112 | orchestrator | 2026-03-23 01:11:29.036117 | orchestrator | TASK [Group hosts based on Kolla action] 
***************************************
2026-03-23 01:11:29.036124 | orchestrator | Monday 23 March 2026 01:07:04 +0000 (0:00:00.331) 0:00:00.331 **********
2026-03-23 01:11:29.036133 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.036170 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:11:29.036179 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:11:29.036206 | orchestrator |
2026-03-23 01:11:29.036215 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 01:11:29.036224 | orchestrator | Monday 23 March 2026 01:07:05 +0000 (0:00:00.257) 0:00:00.589 **********
2026-03-23 01:11:29.036232 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-03-23 01:11:29.036248 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-03-23 01:11:29.036263 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-03-23 01:11:29.036268 | orchestrator |
2026-03-23 01:11:29.036273 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-03-23 01:11:29.036278 | orchestrator |
2026-03-23 01:11:29.036283 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-23 01:11:29.036289 | orchestrator | Monday 23 March 2026 01:07:05 +0000 (0:00:00.265) 0:00:00.855 **********
2026-03-23 01:11:29.036295 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:11:29.036300 | orchestrator |
2026-03-23 01:11:29.036305 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-03-23 01:11:29.036311 | orchestrator | Monday 23 March 2026 01:07:05 +0000 (0:00:00.597) 0:00:01.453 **********
2026-03-23 01:11:29.036316 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-03-23 01:11:29.036321 | orchestrator |
2026-03-23 01:11:29.036326 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-03-23 01:11:29.036331 | orchestrator | Monday 23 March 2026 01:07:10 +0000 (0:00:04.602) 0:00:06.056 **********
2026-03-23 01:11:29.036336 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-03-23 01:11:29.036342 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-03-23 01:11:29.036347 | orchestrator |
2026-03-23 01:11:29.036352 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-03-23 01:11:29.036357 | orchestrator | Monday 23 March 2026 01:07:18 +0000 (0:00:07.770) 0:00:13.826 **********
2026-03-23 01:11:29.036362 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-23 01:11:29.036367 | orchestrator |
2026-03-23 01:11:29.036373 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-03-23 01:11:29.036382 | orchestrator | Monday 23 March 2026 01:07:21 +0000 (0:00:03.660) 0:00:17.486 **********
2026-03-23 01:11:29.036452 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-23 01:11:29.036468 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-23 01:11:29.036477 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-23 01:11:29.036486 | orchestrator |
2026-03-23 01:11:29.036494 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-03-23 01:11:29.036503 | orchestrator | Monday 23 March 2026 01:07:30 +0000 (0:00:08.254) 0:00:25.740 **********
2026-03-23 01:11:29.036512 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-23 01:11:29.036521 | orchestrator |
2026-03-23 01:11:29.036530 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-03-23 01:11:29.036537 | orchestrator | Monday 23 March 2026 01:07:33 +0000 (0:00:03.315) 0:00:29.056 **********
2026-03-23 01:11:29.036543 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-23 01:11:29.036548 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-23 01:11:29.036553 | orchestrator |
2026-03-23 01:11:29.036558 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-03-23 01:11:29.036563 | orchestrator | Monday 23 March 2026 01:07:40 +0000 (0:00:07.241) 0:00:36.297 **********
2026-03-23 01:11:29.036568 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-03-23 01:11:29.036573 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-03-23 01:11:29.036578 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-03-23 01:11:29.036590 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-03-23 01:11:29.036595 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-03-23 01:11:29.036600 | orchestrator |
2026-03-23 01:11:29.036609 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-23 01:11:29.036617 | orchestrator | Monday 23 March 2026 01:07:55 +0000 (0:00:15.100) 0:00:51.398 **********
2026-03-23 01:11:29.036689 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:11:29.036700 | orchestrator |
2026-03-23 01:11:29.036710 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-03-23 01:11:29.036795 | orchestrator | Monday 23 March 2026 01:07:56 +0000 (0:00:00.800) 0:00:52.198 **********
2026-03-23 01:11:29.036803 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.036809 | orchestrator |
2026-03-23 01:11:29.036815 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-03-23 01:11:29.036821 | orchestrator | Monday 23 March 2026 01:08:01 +0000 (0:00:04.806) 0:00:57.005 **********
2026-03-23 01:11:29.036827 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.036833 | orchestrator |
2026-03-23 01:11:29.036839 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-23 01:11:29.036858 | orchestrator | Monday 23 March 2026 01:08:06 +0000 (0:00:05.083) 0:01:02.088 **********
2026-03-23 01:11:29.036865 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.036871 | orchestrator |
2026-03-23 01:11:29.036877 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-03-23 01:11:29.036882 | orchestrator | Monday 23 March 2026 01:08:09 +0000 (0:00:03.176) 0:01:05.265 **********
2026-03-23 01:11:29.036888 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-23 01:11:29.036894 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-23 01:11:29.036900 | orchestrator |
2026-03-23 01:11:29.036906 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-03-23 01:11:29.036912 | orchestrator | Monday 23 March 2026 01:08:19 +0000 (0:00:10.237) 0:01:15.503 **********
2026-03-23 01:11:29.036918 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-03-23 01:11:29.036925 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-03-23 01:11:29.036932 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-03-23 01:11:29.036938 | orchestrator | changed: [testbed-node-0] => (item=[{'name':
'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-03-23 01:11:29.036944 | orchestrator |
2026-03-23 01:11:29.036951 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-03-23 01:11:29.036960 | orchestrator | Monday 23 March 2026 01:08:35 +0000 (0:00:15.460) 0:01:30.964 **********
2026-03-23 01:11:29.036966 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.036971 | orchestrator |
2026-03-23 01:11:29.036976 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-03-23 01:11:29.036988 | orchestrator | Monday 23 March 2026 01:08:41 +0000 (0:00:05.833) 0:01:36.797 **********
2026-03-23 01:11:29.036993 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.036998 | orchestrator |
2026-03-23 01:11:29.037003 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-03-23 01:11:29.037008 | orchestrator | Monday 23 March 2026 01:08:46 +0000 (0:00:05.008) 0:01:41.806 **********
2026-03-23 01:11:29.037013 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:11:29.037018 | orchestrator |
2026-03-23 01:11:29.037024 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-03-23 01:11:29.037029 | orchestrator | Monday 23 March 2026 01:08:46 +0000 (0:00:00.245) 0:01:42.051 **********
2026-03-23 01:11:29.037039 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.037044 | orchestrator |
2026-03-23 01:11:29.037049 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-23 01:11:29.037054 | orchestrator | Monday 23 March 2026 01:08:50 +0000 (0:00:03.688) 0:01:45.740 **********
2026-03-23 01:11:29.037063 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:11:29.037068 | orchestrator |
2026-03-23 01:11:29.037073 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-03-23 01:11:29.037078 | orchestrator | Monday 23 March 2026 01:08:50 +0000 (0:00:00.752) 0:01:46.493 **********
2026-03-23 01:11:29.037084 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.037089 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:11:29.037094 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:11:29.037099 | orchestrator |
2026-03-23 01:11:29.037104 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-03-23 01:11:29.037109 | orchestrator | Monday 23 March 2026 01:08:56 +0000 (0:00:05.242) 0:01:51.735 **********
2026-03-23 01:11:29.037114 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.037119 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:11:29.037124 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:11:29.037129 | orchestrator |
2026-03-23 01:11:29.037134 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-03-23 01:11:29.037140 | orchestrator | Monday 23 March 2026 01:09:00 +0000 (0:00:04.703) 0:01:56.439 **********
2026-03-23 01:11:29.037145 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.037150 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:11:29.037155 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:11:29.037160 | orchestrator |
2026-03-23 01:11:29.037165 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-03-23 01:11:29.037172 | orchestrator | Monday 23 March 2026 01:09:01 +0000 (0:00:00.739) 0:01:57.178 **********
2026-03-23 01:11:29.037180 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.037191 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:11:29.037203 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:11:29.037211 | orchestrator |
2026-03-23 01:11:29.037219 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-03-23 01:11:29.037227 | orchestrator | Monday 23 March 2026 01:09:03 +0000 (0:00:01.537) 0:01:58.715 **********
2026-03-23 01:11:29.037235 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.037243 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:11:29.037250 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:11:29.037259 | orchestrator |
2026-03-23 01:11:29.037267 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-03-23 01:11:29.037276 | orchestrator | Monday 23 March 2026 01:09:04 +0000 (0:00:01.119) 0:01:59.834 **********
2026-03-23 01:11:29.037284 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.037293 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:11:29.037301 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:11:29.037309 | orchestrator |
2026-03-23 01:11:29.037314 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-03-23 01:11:29.037319 | orchestrator | Monday 23 March 2026 01:09:05 +0000 (0:00:01.073) 0:02:00.907 **********
2026-03-23 01:11:29.037324 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:11:29.037329 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.037334 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:11:29.037340 | orchestrator |
2026-03-23 01:11:29.037350 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-03-23 01:11:29.037356 | orchestrator | Monday 23 March 2026 01:09:07 +0000 (0:00:01.974) 0:02:02.882 **********
2026-03-23 01:11:29.037361 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:11:29.037366 | orchestrator | changed: [testbed-node-1]
2026-03-23 01:11:29.037371 | orchestrator | changed: [testbed-node-2]
2026-03-23 01:11:29.037381 | orchestrator |
2026-03-23 01:11:29.037386 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-03-23 01:11:29.037391 | orchestrator | Monday 23 March 2026 01:09:08 +0000 (0:00:01.368) 0:02:04.251 **********
2026-03-23 01:11:29.037396 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.037402 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:11:29.037407 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:11:29.037412 | orchestrator |
2026-03-23 01:11:29.037417 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-03-23 01:11:29.037422 | orchestrator | Monday 23 March 2026 01:09:09 +0000 (0:00:00.563) 0:02:04.815 **********
2026-03-23 01:11:29.037427 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:11:29.037432 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:11:29.037437 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.037442 | orchestrator |
2026-03-23 01:11:29.037447 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-23 01:11:29.037454 | orchestrator | Monday 23 March 2026 01:09:12 +0000 (0:00:03.166) 0:02:07.981 **********
2026-03-23 01:11:29.037462 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:11:29.037471 | orchestrator |
2026-03-23 01:11:29.037479 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-03-23 01:11:29.037487 | orchestrator | Monday 23 March 2026 01:09:13 +0000 (0:00:00.683) 0:02:08.664 **********
2026-03-23 01:11:29.037495 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.037504 | orchestrator |
2026-03-23 01:11:29.037513 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-23 01:11:29.037522 | orchestrator | Monday 23 March 2026 01:09:16 +0000 (0:00:03.849) 0:02:12.514 **********
2026-03-23 01:11:29.037531 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.037539 | orchestrator |
2026-03-23 01:11:29.037548 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-03-23 01:11:29.037558 | orchestrator | Monday 23 March 2026 01:09:19 +0000 (0:00:02.847) 0:02:15.361 **********
2026-03-23 01:11:29.037571 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-23 01:11:29.037580 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-23 01:11:29.037588 | orchestrator |
2026-03-23 01:11:29.037596 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-03-23 01:11:29.037604 | orchestrator | Monday 23 March 2026 01:09:25 +0000 (0:00:05.616) 0:02:20.978 **********
2026-03-23 01:11:29.037611 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.037620 | orchestrator |
2026-03-23 01:11:29.037651 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-03-23 01:11:29.037660 | orchestrator | Monday 23 March 2026 01:09:29 +0000 (0:00:03.566) 0:02:24.544 **********
2026-03-23 01:11:29.037674 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:11:29.037684 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:11:29.037693 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:11:29.037702 | orchestrator |
2026-03-23 01:11:29.037711 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-03-23 01:11:29.037719 | orchestrator | Monday 23 March 2026 01:09:29 +0000 (0:00:00.299) 0:02:24.844 **********
2026-03-23 01:11:29.037731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-23 01:11:29.037757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-23 01:11:29.037764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-23 01:11:29.037770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-23 01:11:29.037776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-23 01:11:29.037784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-23 01:11:29.037790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.037800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.037810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.037816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.037821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.037829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.037835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:11:29.037844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:11:29.037849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:11:29.037854 | orchestrator |
2026-03-23 01:11:29.037860 |
orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-03-23 01:11:29.037865 | orchestrator | Monday 23 March 2026 01:09:31 +0000 (0:00:02.586) 0:02:27.431 **********
2026-03-23 01:11:29.037870 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:11:29.037875 | orchestrator |
2026-03-23 01:11:29.037883 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-03-23 01:11:29.037889 | orchestrator | Monday 23 March 2026 01:09:32 +0000 (0:00:00.134) 0:02:27.566 **********
2026-03-23 01:11:29.037894 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:11:29.037901 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:11:29.037906 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:11:29.037911 | orchestrator |
2026-03-23 01:11:29.037917 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-03-23 01:11:29.037922 | orchestrator | Monday 23 March 2026 01:09:32 +0000 (0:00:00.272) 0:02:27.838 **********
2026-03-23 01:11:29.037927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-23 01:11:29.037933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-23 01:11:29.037941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.037951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.037956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-23 01:11:29.037962 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:11:29.037972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-23 01:11:29.037978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-23 01:11:29.037983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.037994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-23 01:11:29.038002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker
5672'], 'timeout': '30'}}})  2026-03-23 01:11:29.038008 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:11:29.038050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-23 01:11:29.038063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 01:11:29.038068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.038074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.038079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:11:29.038091 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:11:29.038096 | orchestrator | 2026-03-23 01:11:29.038101 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-23 01:11:29.038106 | orchestrator | Monday 23 March 2026 01:09:33 +0000 (0:00:00.705) 0:02:28.544 ********** 2026-03-23 01:11:29.038112 | orchestrator | included: 
/ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-23 01:11:29.038117 | orchestrator | 2026-03-23 01:11:29.038122 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-23 01:11:29.038127 | orchestrator | Monday 23 March 2026 01:09:33 +0000 (0:00:00.682) 0:02:29.227 ********** 2026-03-23 01:11:29.038133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.038141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 
2026-03-23 01:11:29 | INFO  | Task 3f6cbce9-9b3c-49a1-92f3-5c71a727af12 is in state SUCCESS 2026-03-23 01:11:29.038520 | orchestrator | 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.038567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.038577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.038604 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.038618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.038676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.038689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.038713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.038721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.038733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.038743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.038750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.038756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.038766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.038773 | orchestrator | 2026-03-23 01:11:29.038780 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-23 01:11:29.038787 | orchestrator | Monday 23 March 2026 01:09:38 +0000 (0:00:04.660) 0:02:33.887 ********** 2026-03-23 01:11:29.038793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-23 01:11:29.038803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 01:11:29.038812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.038819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.038825 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:11:29.038830 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:11:29.038841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-23 01:11:29.038847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 01:11:29.038857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.038866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.038872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:11:29.038878 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:11:29.038884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-23 01:11:29.038890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 01:11:29.038899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.038909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.038917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:11:29.038928 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:11:29.038943 | orchestrator | 2026-03-23 01:11:29.038954 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-23 01:11:29.038968 | orchestrator | Monday 23 March 2026 01:09:38 +0000 (0:00:00.634) 0:02:34.522 ********** 2026-03-23 
01:11:29.038979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-23 01:11:29.038989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 01:11:29.038999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.039016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.039033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:11:29.039040 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:11:29.039049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-23 01:11:29.039056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 01:11:29.039062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.039068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.039079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:11:29.039089 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:11:29.039095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-03-23 01:11:29.039101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-23 01:11:29.039110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.039117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-23 01:11:29.039124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-23 01:11:29.039130 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:11:29.039136 | orchestrator | 2026-03-23 01:11:29.039143 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-23 01:11:29.039149 | orchestrator | Monday 23 March 2026 01:09:39 +0000 (0:00:00.976) 0:02:35.498 ********** 2026-03-23 01:11:29.039167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.039175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.039196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.039217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.039228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.039235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.039250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-03-23 01:11:29.039282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039309 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039322 | orchestrator | 2026-03-23 01:11:29.039329 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-23 01:11:29.039336 | orchestrator | Monday 23 March 2026 01:09:44 +0000 (0:00:04.645) 0:02:40.144 ********** 2026-03-23 01:11:29.039343 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-23 01:11:29.039350 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-23 01:11:29.039356 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-23 01:11:29.039362 | orchestrator | 2026-03-23 01:11:29.039369 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 
2026-03-23 01:11:29.039376 | orchestrator | Monday 23 March 2026 01:09:46 +0000 (0:00:01.446) 0:02:41.591 ********** 2026-03-23 01:11:29.039385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.039392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.039408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.039416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.039423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.039430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.039437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039459 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039526 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.039575 | orchestrator | 2026-03-23 01:11:29.039584 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-23 01:11:29.039593 | orchestrator | Monday 23 March 2026 01:10:01 +0000 (0:00:15.003) 0:02:56.594 ********** 2026-03-23 01:11:29.039602 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.039611 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:11:29.039620 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:11:29.039644 | orchestrator | 2026-03-23 01:11:29.039653 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-23 01:11:29.039662 | orchestrator | Monday 23 March 2026 01:10:02 +0000 (0:00:01.651) 0:02:58.245 ********** 2026-03-23 01:11:29.039671 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-23 01:11:29.039680 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-23 01:11:29.039695 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-23 01:11:29.039705 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-23 01:11:29.039715 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-23 01:11:29.039725 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-23 01:11:29.039735 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-23 01:11:29.039744 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 
2026-03-23 01:11:29.039754 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-23 01:11:29.039763 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-23 01:11:29.039773 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-23 01:11:29.039781 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-23 01:11:29.039791 | orchestrator | 2026-03-23 01:11:29.039801 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-23 01:11:29.039811 | orchestrator | Monday 23 March 2026 01:10:07 +0000 (0:00:04.775) 0:03:03.021 ********** 2026-03-23 01:11:29.039821 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-23 01:11:29.039827 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-23 01:11:29.039833 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-23 01:11:29.039839 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-23 01:11:29.039845 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-23 01:11:29.039850 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-23 01:11:29.039855 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-23 01:11:29.039861 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-23 01:11:29.039866 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-23 01:11:29.039872 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-23 01:11:29.039877 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-23 01:11:29.039883 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-23 01:11:29.039895 | orchestrator | 2026-03-23 01:11:29.039901 | orchestrator | TASK [octavia : Copying 
certificate files for octavia-health-manager] ********** 2026-03-23 01:11:29.039907 | orchestrator | Monday 23 March 2026 01:10:12 +0000 (0:00:04.796) 0:03:07.817 ********** 2026-03-23 01:11:29.039912 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-23 01:11:29.039918 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-23 01:11:29.039928 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-23 01:11:29.039934 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-23 01:11:29.039940 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-23 01:11:29.039945 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-23 01:11:29.039951 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-23 01:11:29.039957 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-23 01:11:29.039962 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-23 01:11:29.039968 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-23 01:11:29.039973 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-23 01:11:29.039979 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-23 01:11:29.039985 | orchestrator | 2026-03-23 01:11:29.039994 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-23 01:11:29.040001 | orchestrator | Monday 23 March 2026 01:10:17 +0000 (0:00:04.722) 0:03:12.540 ********** 2026-03-23 01:11:29.040007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.040020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.040026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-23 01:11:29.040036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.040045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.040051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-23 01:11:29.040057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.040065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.040071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.040077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.040089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.040098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-23 01:11:29.040104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.040110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-23 01:11:29.040120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
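The "Check octavia containers" task above compares each expected container definition, including its healthcheck mapping, against the running state. As a rough illustration only (the field names and values are copied from the log output above, not from the kolla-ansible source), a minimal sketch that assembles the same port-probe healthcheck structure:

```python
# Sketch: build a kolla-style container healthcheck mapping as it appears in
# the "Check octavia containers" output above. Illustrative only -- this is
# not the kolla-ansible implementation, just the same shape of data.

def port_healthcheck(service: str, port: int,
                     interval: int = 30, retries: int = 3,
                     start_period: int = 5, timeout: int = 30) -> dict:
    """Return a healthcheck dict that probes a TCP port, kolla-style.

    healthcheck_port is a helper shipped inside kolla images that checks
    whether the named process can reach the given port.
    """
    return {
        "interval": str(interval),
        "retries": str(retries),
        "start_period": str(start_period),
        "test": ["CMD-SHELL", f"healthcheck_port {service} {port}"],
        "timeout": str(timeout),
    }

# Matches the octavia-worker entry in the log (RabbitMQ port 5672):
worker_hc = port_healthcheck("octavia-worker", 5672)
```

The octavia-housekeeping and octavia-health-manager entries in the log use the same structure with port 3306 (the database), while octavia-api uses an HTTP probe (`healthcheck_curl`) instead.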
2026-03-23 01:11:29.040126 | orchestrator | 2026-03-23 01:11:29.040131 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-23 01:11:29.040137 | orchestrator | Monday 23 March 2026 01:10:20 +0000 (0:00:03.448) 0:03:15.989 ********** 2026-03-23 01:11:29.040143 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:11:29.040149 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:11:29.040154 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:11:29.040159 | orchestrator | 2026-03-23 01:11:29.040168 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-23 01:11:29.040174 | orchestrator | Monday 23 March 2026 01:10:20 +0000 (0:00:00.383) 0:03:16.372 ********** 2026-03-23 01:11:29.040179 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040184 | orchestrator | 2026-03-23 01:11:29.040190 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-23 01:11:29.040196 | orchestrator | Monday 23 March 2026 01:10:22 +0000 (0:00:02.106) 0:03:18.479 ********** 2026-03-23 01:11:29.040201 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040206 | orchestrator | 2026-03-23 01:11:29.040212 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-23 01:11:29.040217 | orchestrator | Monday 23 March 2026 01:10:24 +0000 (0:00:02.013) 0:03:20.492 ********** 2026-03-23 01:11:29.040223 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040228 | orchestrator | 2026-03-23 01:11:29.040234 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-23 01:11:29.040239 | orchestrator | Monday 23 March 2026 01:10:27 +0000 (0:00:02.110) 0:03:22.602 ********** 2026-03-23 01:11:29.040245 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040250 | orchestrator | 2026-03-23 
01:11:29.040256 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-23 01:11:29.040261 | orchestrator | Monday 23 March 2026 01:10:29 +0000 (0:00:02.005) 0:03:24.608 ********** 2026-03-23 01:11:29.040267 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040272 | orchestrator | 2026-03-23 01:11:29.040278 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-23 01:11:29.040283 | orchestrator | Monday 23 March 2026 01:10:47 +0000 (0:00:18.426) 0:03:43.034 ********** 2026-03-23 01:11:29.040288 | orchestrator | 2026-03-23 01:11:29.040294 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-23 01:11:29.040299 | orchestrator | Monday 23 March 2026 01:10:47 +0000 (0:00:00.083) 0:03:43.118 ********** 2026-03-23 01:11:29.040305 | orchestrator | 2026-03-23 01:11:29.040310 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-23 01:11:29.040315 | orchestrator | Monday 23 March 2026 01:10:47 +0000 (0:00:00.065) 0:03:43.184 ********** 2026-03-23 01:11:29.040321 | orchestrator | 2026-03-23 01:11:29.040329 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-23 01:11:29.040334 | orchestrator | Monday 23 March 2026 01:10:47 +0000 (0:00:00.074) 0:03:43.258 ********** 2026-03-23 01:11:29.040340 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040345 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:11:29.040351 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:11:29.040356 | orchestrator | 2026-03-23 01:11:29.040362 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-23 01:11:29.040367 | orchestrator | Monday 23 March 2026 01:10:56 +0000 (0:00:08.724) 0:03:51.983 ********** 2026-03-23 01:11:29.040372 | orchestrator | changed: 
[testbed-node-1] 2026-03-23 01:11:29.040378 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:11:29.040383 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040389 | orchestrator | 2026-03-23 01:11:29.040394 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-23 01:11:29.040400 | orchestrator | Monday 23 March 2026 01:11:04 +0000 (0:00:07.971) 0:03:59.955 ********** 2026-03-23 01:11:29.040405 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:11:29.040411 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:11:29.040416 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040422 | orchestrator | 2026-03-23 01:11:29.040427 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-23 01:11:29.040433 | orchestrator | Monday 23 March 2026 01:11:12 +0000 (0:00:08.261) 0:04:08.217 ********** 2026-03-23 01:11:29.040438 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040443 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:11:29.040449 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:11:29.040458 | orchestrator | 2026-03-23 01:11:29.040464 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-23 01:11:29.040470 | orchestrator | Monday 23 March 2026 01:11:17 +0000 (0:00:05.239) 0:04:13.456 ********** 2026-03-23 01:11:29.040475 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:11:29.040481 | orchestrator | changed: [testbed-node-1] 2026-03-23 01:11:29.040486 | orchestrator | changed: [testbed-node-2] 2026-03-23 01:11:29.040492 | orchestrator | 2026-03-23 01:11:29.040497 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:11:29.040503 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-23 01:11:29.040509 | orchestrator | 
testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-23 01:11:29.040515 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-23 01:11:29.040521 | orchestrator | 2026-03-23 01:11:29.040526 | orchestrator | 2026-03-23 01:11:29.040532 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:11:29.040537 | orchestrator | Monday 23 March 2026 01:11:27 +0000 (0:00:09.967) 0:04:23.424 ********** 2026-03-23 01:11:29.040592 | orchestrator | =============================================================================== 2026-03-23 01:11:29.040599 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 18.43s 2026-03-23 01:11:29.040605 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.46s 2026-03-23 01:11:29.040610 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.10s 2026-03-23 01:11:29.040616 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.00s 2026-03-23 01:11:29.040621 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.24s 2026-03-23 01:11:29.040666 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 9.97s 2026-03-23 01:11:29.040673 | orchestrator | octavia : Restart octavia-api container --------------------------------- 8.72s 2026-03-23 01:11:29.040678 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.26s 2026-03-23 01:11:29.040684 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.25s 2026-03-23 01:11:29.040689 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.97s 2026-03-23 01:11:29.040695 | orchestrator | service-ks-register : octavia | 
Creating endpoints ---------------------- 7.77s 2026-03-23 01:11:29.040700 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.24s 2026-03-23 01:11:29.040706 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.83s 2026-03-23 01:11:29.040711 | orchestrator | octavia : Get security groups for octavia ------------------------------- 5.62s 2026-03-23 01:11:29.040717 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.24s 2026-03-23 01:11:29.040722 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.24s 2026-03-23 01:11:29.040728 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.08s 2026-03-23 01:11:29.040733 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.01s 2026-03-23 01:11:29.040739 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 4.81s 2026-03-23 01:11:29.040744 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.80s 2026-03-23 01:11:29.040750 | orchestrator | 2026-03-23 01:11:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-23 01:12:29.897633 | orchestrator | 2026-03-23 01:12:30.078802 | orchestrator | 2026-03-23 01:12:30.082355 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Mar 23 01:12:30 UTC 2026 2026-03-23 01:12:30.082446 | orchestrator | 2026-03-23 01:12:30.408867 | orchestrator | ok: Runtime: 0:31:43.697482 2026-03-23 01:12:30.662028 | 2026-03-23 01:12:30.662188 | TASK [Bootstrap
services] 2026-03-23 01:12:31.454154 | orchestrator | 2026-03-23 01:12:31.454255 | orchestrator | # BOOTSTRAP 2026-03-23 01:12:31.454266 | orchestrator | 2026-03-23 01:12:31.454274 | orchestrator | + set -e 2026-03-23 01:12:31.454282 | orchestrator | + echo 2026-03-23 01:12:31.454290 | orchestrator | + echo '# BOOTSTRAP' 2026-03-23 01:12:31.454300 | orchestrator | + echo 2026-03-23 01:12:31.454325 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-23 01:12:31.463590 | orchestrator | + set -e 2026-03-23 01:12:31.463640 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-23 01:12:36.260126 | orchestrator | 2026-03-23 01:12:36 | INFO  | It takes a moment until task 8be24233-224c-4a0c-96b6-45893b216819 (flavor-manager) has been started and output is visible here. 2026-03-23 01:12:44.833000 | orchestrator | 2026-03-23 01:12:40 | INFO  | Flavor SCS-1L-1 created 2026-03-23 01:12:44.833068 | orchestrator | 2026-03-23 01:12:40 | INFO  | Flavor SCS-1L-1-5 created 2026-03-23 01:12:44.833078 | orchestrator | 2026-03-23 01:12:41 | INFO  | Flavor SCS-1V-2 created 2026-03-23 01:12:44.833084 | orchestrator | 2026-03-23 01:12:41 | INFO  | Flavor SCS-1V-2-5 created 2026-03-23 01:12:44.833090 | orchestrator | 2026-03-23 01:12:41 | INFO  | Flavor SCS-1V-4 created 2026-03-23 01:12:44.833095 | orchestrator | 2026-03-23 01:12:41 | INFO  | Flavor SCS-1V-4-10 created 2026-03-23 01:12:44.833098 | orchestrator | 2026-03-23 01:12:42 | INFO  | Flavor SCS-1V-8 created 2026-03-23 01:12:44.833102 | orchestrator | 2026-03-23 01:12:42 | INFO  | Flavor SCS-1V-8-20 created 2026-03-23 01:12:44.833109 | orchestrator | 2026-03-23 01:12:42 | INFO  | Flavor SCS-2V-4 created 2026-03-23 01:12:44.833113 | orchestrator | 2026-03-23 01:12:42 | INFO  | Flavor SCS-2V-4-10 created 2026-03-23 01:12:44.833116 | orchestrator | 2026-03-23 01:12:42 | INFO  | Flavor SCS-2V-8 created 2026-03-23 01:12:44.833119 | orchestrator | 2026-03-23 01:12:42 | INFO  
| Flavor SCS-2V-8-20 created 2026-03-23 01:12:44.833122 | orchestrator | 2026-03-23 01:12:42 | INFO  | Flavor SCS-2V-16 created 2026-03-23 01:12:44.833126 | orchestrator | 2026-03-23 01:12:42 | INFO  | Flavor SCS-2V-16-50 created 2026-03-23 01:12:44.833129 | orchestrator | 2026-03-23 01:12:43 | INFO  | Flavor SCS-4V-8 created 2026-03-23 01:12:44.833132 | orchestrator | 2026-03-23 01:12:43 | INFO  | Flavor SCS-4V-8-20 created 2026-03-23 01:12:44.833135 | orchestrator | 2026-03-23 01:12:43 | INFO  | Flavor SCS-4V-16 created 2026-03-23 01:12:44.833138 | orchestrator | 2026-03-23 01:12:43 | INFO  | Flavor SCS-4V-16-50 created 2026-03-23 01:12:44.833141 | orchestrator | 2026-03-23 01:12:43 | INFO  | Flavor SCS-4V-32 created 2026-03-23 01:12:44.833144 | orchestrator | 2026-03-23 01:12:43 | INFO  | Flavor SCS-4V-32-100 created 2026-03-23 01:12:44.833148 | orchestrator | 2026-03-23 01:12:43 | INFO  | Flavor SCS-8V-16 created 2026-03-23 01:12:44.833151 | orchestrator | 2026-03-23 01:12:43 | INFO  | Flavor SCS-8V-16-50 created 2026-03-23 01:12:44.833154 | orchestrator | 2026-03-23 01:12:43 | INFO  | Flavor SCS-8V-32 created 2026-03-23 01:12:44.833157 | orchestrator | 2026-03-23 01:12:44 | INFO  | Flavor SCS-8V-32-100 created 2026-03-23 01:12:44.833160 | orchestrator | 2026-03-23 01:12:44 | INFO  | Flavor SCS-16V-32 created 2026-03-23 01:12:44.833164 | orchestrator | 2026-03-23 01:12:44 | INFO  | Flavor SCS-16V-32-100 created 2026-03-23 01:12:44.833167 | orchestrator | 2026-03-23 01:12:44 | INFO  | Flavor SCS-2V-4-20s created 2026-03-23 01:12:44.833170 | orchestrator | 2026-03-23 01:12:44 | INFO  | Flavor SCS-4V-8-50s created 2026-03-23 01:12:44.833173 | orchestrator | 2026-03-23 01:12:44 | INFO  | Flavor SCS-4V-16-100s created 2026-03-23 01:12:44.833176 | orchestrator | 2026-03-23 01:12:44 | INFO  | Flavor SCS-8V-32-100s created 2026-03-23 01:12:46.369053 | orchestrator | 2026-03-23 01:12:46 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-23 
01:12:56.427600 | orchestrator | 2026-03-23 01:12:56 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-23 01:12:56.498516 | orchestrator | 2026-03-23 01:12:56 | INFO  | Task 3acf309a-a1ca-4254-8f1a-dafd1dc9e4ab (bootstrap-basic) was prepared for execution. 2026-03-23 01:12:56.498569 | orchestrator | 2026-03-23 01:12:56 | INFO  | It takes a moment until task 3acf309a-a1ca-4254-8f1a-dafd1dc9e4ab (bootstrap-basic) has been started and output is visible here. 2026-03-23 01:13:41.862524 | orchestrator | 2026-03-23 01:13:41.862597 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-23 01:13:41.862610 | orchestrator | 2026-03-23 01:13:41.862618 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-23 01:13:41.862627 | orchestrator | Monday 23 March 2026 01:12:59 +0000 (0:00:00.096) 0:00:00.096 ********** 2026-03-23 01:13:41.862636 | orchestrator | ok: [localhost] 2026-03-23 01:13:41.862645 | orchestrator | 2026-03-23 01:13:41.862653 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-23 01:13:41.862662 | orchestrator | Monday 23 March 2026 01:13:01 +0000 (0:00:01.857) 0:00:01.954 ********** 2026-03-23 01:13:41.862672 | orchestrator | ok: [localhost] 2026-03-23 01:13:41.862680 | orchestrator | 2026-03-23 01:13:41.862688 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-23 01:13:41.862697 | orchestrator | Monday 23 March 2026 01:13:10 +0000 (0:00:08.998) 0:00:10.952 ********** 2026-03-23 01:13:41.862705 | orchestrator | changed: [localhost] 2026-03-23 01:13:41.862714 | orchestrator | 2026-03-23 01:13:41.862722 | orchestrator | TASK [Create public network] *************************************************** 2026-03-23 01:13:41.862731 | orchestrator | Monday 23 March 2026 01:13:18 +0000 (0:00:07.832) 0:00:18.784 ********** 2026-03-23 01:13:41.862739 
| orchestrator | changed: [localhost] 2026-03-23 01:13:41.862748 | orchestrator | 2026-03-23 01:13:41.862760 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-23 01:13:41.862769 | orchestrator | Monday 23 March 2026 01:13:23 +0000 (0:00:05.465) 0:00:24.250 ********** 2026-03-23 01:13:41.862778 | orchestrator | changed: [localhost] 2026-03-23 01:13:41.862786 | orchestrator | 2026-03-23 01:13:41.862795 | orchestrator | TASK [Create public subnet] **************************************************** 2026-03-23 01:13:41.862805 | orchestrator | Monday 23 March 2026 01:13:29 +0000 (0:00:06.161) 0:00:30.411 ********** 2026-03-23 01:13:41.862813 | orchestrator | changed: [localhost] 2026-03-23 01:13:41.862822 | orchestrator | 2026-03-23 01:13:41.862831 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-23 01:13:41.862838 | orchestrator | Monday 23 March 2026 01:13:34 +0000 (0:00:04.518) 0:00:34.930 ********** 2026-03-23 01:13:41.862843 | orchestrator | changed: [localhost] 2026-03-23 01:13:41.862848 | orchestrator | 2026-03-23 01:13:41.862854 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-23 01:13:41.862866 | orchestrator | Monday 23 March 2026 01:13:38 +0000 (0:00:03.769) 0:00:38.700 ********** 2026-03-23 01:13:41.862871 | orchestrator | ok: [localhost] 2026-03-23 01:13:41.862877 | orchestrator | 2026-03-23 01:13:41.862882 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:13:41.862887 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-23 01:13:41.862893 | orchestrator | 2026-03-23 01:13:41.862898 | orchestrator | 2026-03-23 01:13:41.862903 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:13:41.862908 | orchestrator | Monday 23 
March 2026 01:13:41 +0000 (0:00:03.581) 0:00:42.282 ********** 2026-03-23 01:13:41.862913 | orchestrator | =============================================================================== 2026-03-23 01:13:41.862918 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.00s 2026-03-23 01:13:41.862937 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.83s 2026-03-23 01:13:41.862942 | orchestrator | Set public network to default ------------------------------------------- 6.16s 2026-03-23 01:13:41.862948 | orchestrator | Create public network --------------------------------------------------- 5.47s 2026-03-23 01:13:41.862953 | orchestrator | Create public subnet ---------------------------------------------------- 4.52s 2026-03-23 01:13:41.862958 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.77s 2026-03-23 01:13:41.862963 | orchestrator | Create manager role ----------------------------------------------------- 3.58s 2026-03-23 01:13:41.862968 | orchestrator | Gathering Facts --------------------------------------------------------- 1.86s 2026-03-23 01:13:43.795426 | orchestrator | 2026-03-23 01:13:43 | INFO  | It takes a moment until task 1e3880cc-721f-4f03-9686-0c4273b5c251 (image-manager) has been started and output is visible here. 
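The flavor-manager output earlier in this step creates flavors following the SCS flavor-naming convention: `SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]`, where `V` denotes an oversubscribable vCPU, `L` a low-performance vCPU, and a trailing `s` an SSD-backed root disk. A hedged parsing sketch (my own interpretation of the naming scheme seen in the log, not part of osism/flavor-manager):

```python
import re

# Sketch: parse SCS flavor names like those created above (SCS-2V-4-10,
# SCS-1L-1, SCS-2V-4-20s). Covers only the name shapes visible in the log;
# the full SCS naming standard permits additional suffixes not handled here.
_SCS_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_type>[VL])"     # vCPU count and class
    r"-(?P<ram>\d+)"                            # RAM in GiB
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"          # optional root disk in GB, 's' = SSD
)

def parse_scs_flavor(name: str) -> dict:
    """Decode an SCS flavor name into its resource components."""
    m = _SCS_RE.match(name)
    if not m:
        raise ValueError(f"not a recognized SCS flavor name: {name}")
    return {
        "vcpus": int(m["cpus"]),
        "cpu_type": m["cpu_type"],              # V = oversubscribed, L = low-perf
        "ram_gib": int(m["ram"]),
        "disk_gb": int(m["disk"]) if m["disk"] else None,
        "ssd": m["ssd"] is not None,
    }
```

For example, `SCS-4V-16-50` from the log decodes to 4 oversubscribable vCPUs, 16 GiB RAM, and a 50 GB root disk; flavors without a disk component (e.g. `SCS-2V-16`) are diskless/boot-from-volume.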
2026-03-23 01:14:23.914133 | orchestrator | 2026-03-23 01:13:46 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-23 01:14:23.914206 | orchestrator | 2026-03-23 01:13:46 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-23 01:14:23.914214 | orchestrator | 2026-03-23 01:13:46 | INFO  | Importing image Cirros 0.6.2
2026-03-23 01:14:23.914218 | orchestrator | 2026-03-23 01:13:46 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-23 01:14:23.914223 | orchestrator | 2026-03-23 01:13:49 | INFO  | Waiting for image to leave queued state...
2026-03-23 01:14:23.914231 | orchestrator | 2026-03-23 01:13:51 | INFO  | Waiting for import to complete...
2026-03-23 01:14:23.914237 | orchestrator | 2026-03-23 01:14:01 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-23 01:14:23.914245 | orchestrator | 2026-03-23 01:14:01 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-23 01:14:23.914251 | orchestrator | 2026-03-23 01:14:01 | INFO  | Setting internal_version = 0.6.2
2026-03-23 01:14:23.914259 | orchestrator | 2026-03-23 01:14:01 | INFO  | Setting image_original_user = cirros
2026-03-23 01:14:23.914267 | orchestrator | 2026-03-23 01:14:01 | INFO  | Adding tag os:cirros
2026-03-23 01:14:23.914274 | orchestrator | 2026-03-23 01:14:01 | INFO  | Setting property architecture: x86_64
2026-03-23 01:14:23.914279 | orchestrator | 2026-03-23 01:14:02 | INFO  | Setting property hw_disk_bus: scsi
2026-03-23 01:14:23.914283 | orchestrator | 2026-03-23 01:14:02 | INFO  | Setting property hw_rng_model: virtio
2026-03-23 01:14:23.914287 | orchestrator | 2026-03-23 01:14:02 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-23 01:14:23.914291 | orchestrator | 2026-03-23 01:14:02 | INFO  | Setting property hw_watchdog_action: reset
2026-03-23 01:14:23.914295 | orchestrator | 2026-03-23 01:14:02 | INFO  | Setting property hypervisor_type: qemu
2026-03-23 01:14:23.914304 | orchestrator | 2026-03-23 01:14:03 | INFO  | Setting property os_distro: cirros
2026-03-23 01:14:23.914308 | orchestrator | 2026-03-23 01:14:03 | INFO  | Setting property os_purpose: minimal
2026-03-23 01:14:23.914388 | orchestrator | 2026-03-23 01:14:03 | INFO  | Setting property replace_frequency: never
2026-03-23 01:14:23.914400 | orchestrator | 2026-03-23 01:14:03 | INFO  | Setting property uuid_validity: none
2026-03-23 01:14:23.914404 | orchestrator | 2026-03-23 01:14:03 | INFO  | Setting property provided_until: none
2026-03-23 01:14:23.914407 | orchestrator | 2026-03-23 01:14:03 | INFO  | Setting property image_description: Cirros
2026-03-23 01:14:23.914411 | orchestrator | 2026-03-23 01:14:04 | INFO  | Setting property image_name: Cirros
2026-03-23 01:14:23.914431 | orchestrator | 2026-03-23 01:14:04 | INFO  | Setting property internal_version: 0.6.2
2026-03-23 01:14:23.914435 | orchestrator | 2026-03-23 01:14:04 | INFO  | Setting property image_original_user: cirros
2026-03-23 01:14:23.914438 | orchestrator | 2026-03-23 01:14:04 | INFO  | Setting property os_version: 0.6.2
2026-03-23 01:14:23.914443 | orchestrator | 2026-03-23 01:14:04 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-23 01:14:23.914448 | orchestrator | 2026-03-23 01:14:05 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-23 01:14:23.914458 | orchestrator | 2026-03-23 01:14:05 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-23 01:14:23.914462 | orchestrator | 2026-03-23 01:14:05 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-23 01:14:23.914474 | orchestrator | 2026-03-23 01:14:05 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-23 01:14:23.914478 | orchestrator | 2026-03-23 01:14:05 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-23 01:14:23.914482 | orchestrator | 2026-03-23 01:14:05 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-23 01:14:23.914486 | orchestrator | 2026-03-23 01:14:05 | INFO  | Importing image Cirros 0.6.3
2026-03-23 01:14:23.914490 | orchestrator | 2026-03-23 01:14:05 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-23 01:14:23.914493 | orchestrator | 2026-03-23 01:14:06 | INFO  | Waiting for image to leave queued state...
2026-03-23 01:14:23.914497 | orchestrator | 2026-03-23 01:14:08 | INFO  | Waiting for import to complete...
2026-03-23 01:14:23.914512 | orchestrator | 2026-03-23 01:14:18 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-23 01:14:23.914516 | orchestrator | 2026-03-23 01:14:19 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-23 01:14:23.914520 | orchestrator | 2026-03-23 01:14:19 | INFO  | Setting internal_version = 0.6.3
2026-03-23 01:14:23.914524 | orchestrator | 2026-03-23 01:14:19 | INFO  | Setting image_original_user = cirros
2026-03-23 01:14:23.914528 | orchestrator | 2026-03-23 01:14:19 | INFO  | Adding tag os:cirros
2026-03-23 01:14:23.914531 | orchestrator | 2026-03-23 01:14:19 | INFO  | Setting property architecture: x86_64
2026-03-23 01:14:23.914535 | orchestrator | 2026-03-23 01:14:19 | INFO  | Setting property hw_disk_bus: scsi
2026-03-23 01:14:23.914539 | orchestrator | 2026-03-23 01:14:19 | INFO  | Setting property hw_rng_model: virtio
2026-03-23 01:14:23.914543 | orchestrator | 2026-03-23 01:14:19 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-23 01:14:23.914547 | orchestrator | 2026-03-23 01:14:20 | INFO  | Setting property hw_watchdog_action: reset
2026-03-23 01:14:23.914550 | orchestrator | 2026-03-23 01:14:20 | INFO  | Setting property hypervisor_type: qemu
2026-03-23 01:14:23.914554 | orchestrator | 2026-03-23 01:14:20 | INFO  | Setting property os_distro: cirros
2026-03-23 01:14:23.914558 | orchestrator | 2026-03-23 01:14:20 | INFO  | Setting property os_purpose: minimal
2026-03-23 01:14:23.914562 | orchestrator | 2026-03-23 01:14:20 | INFO  | Setting property replace_frequency: never
2026-03-23 01:14:23.914565 | orchestrator | 2026-03-23 01:14:21 | INFO  | Setting property uuid_validity: none
2026-03-23 01:14:23.914569 | orchestrator | 2026-03-23 01:14:21 | INFO  | Setting property provided_until: none
2026-03-23 01:14:23.914573 | orchestrator | 2026-03-23 01:14:21 | INFO  | Setting property image_description: Cirros
2026-03-23 01:14:23.914582 | orchestrator | 2026-03-23 01:14:21 | INFO  | Setting property image_name: Cirros
2026-03-23 01:14:23.914585 | orchestrator | 2026-03-23 01:14:21 | INFO  | Setting property internal_version: 0.6.3
2026-03-23 01:14:23.914589 | orchestrator | 2026-03-23 01:14:22 | INFO  | Setting property image_original_user: cirros
2026-03-23 01:14:23.914593 | orchestrator | 2026-03-23 01:14:22 | INFO  | Setting property os_version: 0.6.3
2026-03-23 01:14:23.914597 | orchestrator | 2026-03-23 01:14:22 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-23 01:14:23.914600 | orchestrator | 2026-03-23 01:14:22 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-23 01:14:23.914604 | orchestrator | 2026-03-23 01:14:22 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-23 01:14:23.914609 | orchestrator | 2026-03-23 01:14:22 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-23 01:14:23.914615 | orchestrator | 2026-03-23 01:14:22 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-23 01:14:24.203633 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-23 01:14:26.277785 | orchestrator | 2026-03-23 01:14:26 | INFO  | date: 2026-03-22
2026-03-23 01:14:26.277984 | orchestrator | 2026-03-23 01:14:26 | INFO  | image: octavia-amphora-haproxy-2024.2.20260322.qcow2
2026-03-23 01:14:26.278129 | orchestrator | 2026-03-23 01:14:26 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260322.qcow2
2026-03-23 01:14:26.278200 | orchestrator | 2026-03-23 01:14:26 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260322.qcow2.CHECKSUM
2026-03-23 01:14:26.485525 | orchestrator | 2026-03-23 01:14:26 | INFO  | checksum: d7f6f7762bbcb2f2f3458c9fe5e2daa9d4909d3c47d0ba0b8558bb4326707cc8
2026-03-23 01:14:26.587276 | orchestrator | 2026-03-23 01:14:26 | INFO  | It takes a moment until task 513b04b0-fac9-4832-b96d-8713c2ac5e33 (image-manager) has been started and output is visible here.
2026-03-23 01:15:18.133001 | orchestrator | 2026-03-23 01:14:28 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-22'
2026-03-23 01:15:18.133092 | orchestrator | 2026-03-23 01:14:28 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260322.qcow2: 200
2026-03-23 01:15:18.133104 | orchestrator | 2026-03-23 01:14:28 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-22
2026-03-23 01:15:18.133112 | orchestrator | 2026-03-23 01:14:28 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260322.qcow2
2026-03-23 01:15:18.133120 | orchestrator | 2026-03-23 01:14:30 | INFO  | Waiting for image to leave queued state...
2026-03-23 01:15:18.133128 | orchestrator | 2026-03-23 01:14:32 | INFO  | Waiting for import to complete...
2026-03-23 01:15:18.133135 | orchestrator | 2026-03-23 01:14:43 | INFO  | Waiting for import to complete...
2026-03-23 01:15:18.133143 | orchestrator | 2026-03-23 01:14:53 | INFO  | Waiting for import to complete...
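The bootstrap script above logs the image URL, a companion `.CHECKSUM` URL, and the expected SHA-256 before handing off to the image-manager. A minimal sketch of that kind of verification, assuming a checksum file in the usual `<hash>  <filename>` format (the function and file names here are illustrative, not taken from the script):

```shell
# verify_checksum FILE CHECKSUM_FILE
# Compares the SHA-256 of FILE against the entry for its basename in
# CHECKSUM_FILE (lines of the form "<hash>  <name>").
verify_checksum() {
    local file=$1 sums=$2 expected actual
    # Pick the expected hash for this file out of the checksum list.
    expected=$(awk -v f="$(basename "$file")" '$2 == f { print $1 }' "$sums")
    # Compute the actual hash of the downloaded image.
    actual=$(sha256sum "$file" | awk '{ print $1 }')
    [ -n "$expected" ] && [ "$expected" = "$actual" ]
}
```

Returning a non-zero status on mismatch lets a `set -e` bootstrap script abort before a corrupted image is ever uploaded to Glance.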
2026-03-23 01:15:18.133151 | orchestrator | 2026-03-23 01:15:03 | INFO  | Waiting for import to complete...
2026-03-23 01:15:18.133159 | orchestrator | 2026-03-23 01:15:13 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-22' successfully completed, reloading images
2026-03-23 01:15:18.133165 | orchestrator | 2026-03-23 01:15:13 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-22'
2026-03-23 01:15:18.133186 | orchestrator | 2026-03-23 01:15:13 | INFO  | Setting internal_version = 2026-03-22
2026-03-23 01:15:18.133191 | orchestrator | 2026-03-23 01:15:13 | INFO  | Setting image_original_user = ubuntu
2026-03-23 01:15:18.133195 | orchestrator | 2026-03-23 01:15:13 | INFO  | Adding tag amphora
2026-03-23 01:15:18.133200 | orchestrator | 2026-03-23 01:15:14 | INFO  | Adding tag os:ubuntu
2026-03-23 01:15:18.133204 | orchestrator | 2026-03-23 01:15:14 | INFO  | Setting property architecture: x86_64
2026-03-23 01:15:18.133207 | orchestrator | 2026-03-23 01:15:14 | INFO  | Setting property hw_disk_bus: scsi
2026-03-23 01:15:18.133211 | orchestrator | 2026-03-23 01:15:14 | INFO  | Setting property hw_rng_model: virtio
2026-03-23 01:15:18.133215 | orchestrator | 2026-03-23 01:15:14 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-23 01:15:18.133278 | orchestrator | 2026-03-23 01:15:14 | INFO  | Setting property hw_watchdog_action: reset
2026-03-23 01:15:18.133289 | orchestrator | 2026-03-23 01:15:15 | INFO  | Setting property hypervisor_type: qemu
2026-03-23 01:15:18.133299 | orchestrator | 2026-03-23 01:15:15 | INFO  | Setting property os_distro: ubuntu
2026-03-23 01:15:18.133305 | orchestrator | 2026-03-23 01:15:15 | INFO  | Setting property replace_frequency: quarterly
2026-03-23 01:15:18.133312 | orchestrator | 2026-03-23 01:15:15 | INFO  | Setting property uuid_validity: last-1
2026-03-23 01:15:18.133319 | orchestrator | 2026-03-23 01:15:15 | INFO  | Setting property provided_until: none
2026-03-23 01:15:18.133325 | orchestrator | 2026-03-23 01:15:15 | INFO  | Setting property os_purpose: network
2026-03-23 01:15:18.133331 | orchestrator | 2026-03-23 01:15:16 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-03-23 01:15:18.133337 | orchestrator | 2026-03-23 01:15:16 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-03-23 01:15:18.133345 | orchestrator | 2026-03-23 01:15:16 | INFO  | Setting property internal_version: 2026-03-22
2026-03-23 01:15:18.133367 | orchestrator | 2026-03-23 01:15:16 | INFO  | Setting property image_original_user: ubuntu
2026-03-23 01:15:18.133373 | orchestrator | 2026-03-23 01:15:17 | INFO  | Setting property os_version: 2026-03-22
2026-03-23 01:15:18.133379 | orchestrator | 2026-03-23 01:15:17 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260322.qcow2
2026-03-23 01:15:18.133385 | orchestrator | 2026-03-23 01:15:17 | INFO  | Setting property image_build_date: 2026-03-22
2026-03-23 01:15:18.133390 | orchestrator | 2026-03-23 01:15:17 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-22'
2026-03-23 01:15:18.133396 | orchestrator | 2026-03-23 01:15:17 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-22'
2026-03-23 01:15:18.133403 | orchestrator | 2026-03-23 01:15:18 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-03-23 01:15:18.133429 | orchestrator | 2026-03-23 01:15:18 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-03-23 01:15:18.133436 | orchestrator | 2026-03-23 01:15:18 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-03-23 01:15:18.133443 | orchestrator | 2026-03-23 01:15:18 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-03-23 01:15:18.785207 | orchestrator | ok: Runtime: 0:02:47.248272
2026-03-23 01:15:18.808992 |
2026-03-23 01:15:18.809142 | TASK [Run checks]
2026-03-23 01:15:19.552379 | orchestrator | + set -e
2026-03-23 01:15:19.552511 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-23 01:15:19.552521 | orchestrator | ++ export INTERACTIVE=false
2026-03-23 01:15:19.552529 | orchestrator | ++ INTERACTIVE=false
2026-03-23 01:15:19.552535 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-23 01:15:19.552540 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-23 01:15:19.552545 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-23 01:15:19.553270 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-23 01:15:19.559009 | orchestrator |
2026-03-23 01:15:19.559102 | orchestrator | # CHECK
2026-03-23 01:15:19.559115 | orchestrator |
2026-03-23 01:15:19.559122 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-23 01:15:19.559133 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-23 01:15:19.559139 | orchestrator | + echo
2026-03-23 01:15:19.559146 | orchestrator | + echo '# CHECK'
2026-03-23 01:15:19.559152 | orchestrator | + echo
2026-03-23 01:15:19.559163 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-23 01:15:19.559209 | orchestrator | ++ semver latest 5.0.0
2026-03-23 01:15:19.601840 | orchestrator |
2026-03-23 01:15:19.601919 | orchestrator | ## Containers @ testbed-manager
2026-03-23 01:15:19.601926 | orchestrator |
2026-03-23 01:15:19.601940 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-23 01:15:19.601945 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-23 01:15:19.601949 | orchestrator | + echo
2026-03-23 01:15:19.601954 | orchestrator | + echo '## Containers @ testbed-manager'
2026-03-23 01:15:19.601958 | orchestrator | + echo
2026-03-23 01:15:19.601962 | orchestrator | + osism container testbed-manager ps
2026-03-23 01:15:20.614285 | orchestrator | 2026-03-23 01:15:20 | INFO  | Creating empty known_hosts file: /share/known_hosts
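The trace above shows the version gate in the check script: `semver latest 5.0.0` prints -1, and the follow-up `[[ latest != latest ]]` test fails, so the job stays on the current code path. A sketch of that gating logic, with a toy comparator standing in for the testbed's `semver` helper (its behaviour beyond the -1/0/1 contract visible in the trace is an assumption):

```shell
# Toy stand-in for the `semver` helper seen in the trace: prints -1, 0, or 1
# for "a < b", "a == b", "a > b". "latest" is treated as sorting below any
# pinned release, matching the -1 result logged for `semver latest 5.0.0`
# (an assumption about the real helper).
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$1" = "latest" ]; then
        echo -1
    elif [ "$2" = "latest" ]; then
        echo 1
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

# Mirrors the two bracket tests in the trace: the legacy path is taken only
# for a pinned manager_version older than 5.0.0; MANAGER_VERSION=latest
# falls through to the current path.
check_path() {
    if [ "$(semver "$1" 5.0.0)" -eq -1 ] && [ "$1" != latest ]; then
        echo legacy
    else
        echo current
    fi
}
```

The double test is needed because the comparator cannot distinguish "latest" from a genuinely old version on its own; the explicit string check keeps rolling deployments on the current path.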
2026-03-23 01:15:20.986001 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-23 01:15:20.986114 | orchestrator | f3b9aea6b068 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter
2026-03-23 01:15:20.986132 | orchestrator | 009c41123482 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager
2026-03-23 01:15:20.986141 | orchestrator | 21f31c65de58 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-03-23 01:15:20.986145 | orchestrator | 21ea98791cb6 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2026-03-23 01:15:20.986152 | orchestrator | 588d49e8c765 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server
2026-03-23 01:15:20.986156 | orchestrator | c83652842198 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 15 minutes cephclient
2026-03-23 01:15:20.986160 | orchestrator | 73b5bcee665d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-03-23 01:15:20.986164 | orchestrator | 8895e6938986 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox
2026-03-23 01:15:20.986185 | orchestrator | 3fb7aff04073 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-03-23 01:15:20.986190 | orchestrator | 742c2862543f phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 28 minutes ago Up 27 minutes (healthy) 80/tcp phpmyadmin
2026-03-23 01:15:20.986194 | orchestrator | e15767c435ae registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 28 minutes openstackclient
2026-03-23 01:15:20.986198 | orchestrator | e9023bcfc61e registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 29 minutes ago Up 28 minutes (healthy) 8080/tcp homer
2026-03-23 01:15:20.986202 | orchestrator | 2ce30dfd4e4c registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 51 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-03-23 01:15:20.986206 | orchestrator | 170565cbd8f1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 55 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1
2026-03-23 01:15:20.986210 | orchestrator | eb3dff61ade8 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 55 minutes ago Up 35 minutes (healthy) osism-kubernetes
2026-03-23 01:15:20.986253 | orchestrator | 7c3d7377a545 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 55 minutes ago Up 35 minutes (healthy) ceph-ansible
2026-03-23 01:15:20.986262 | orchestrator | b8c625fc0684 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 55 minutes ago Up 35 minutes (healthy) kolla-ansible
2026-03-23 01:15:20.986266 | orchestrator | 9a9970587c77 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 55 minutes ago Up 35 minutes (healthy) osism-ansible
2026-03-23 01:15:20.986270 | orchestrator | 7636e25ca590 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 55 minutes ago Up 35 minutes (healthy) 8000/tcp manager-ara-server-1
2026-03-23 01:15:20.986274 | orchestrator | 40a3bdfd5eca registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 55 minutes ago Up 36 minutes (healthy) osismclient
2026-03-23 01:15:20.986278 | orchestrator | 8addaa8f96e1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) manager-listener-1
2026-03-23 01:15:20.986282 | orchestrator | 1c38bd3798e1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-03-23 01:15:20.986285 | orchestrator | 6df789ccc957 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 55 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1
2026-03-23 01:15:20.986293 | orchestrator | 3e6221602103 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 55 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1
2026-03-23 01:15:20.986297 | orchestrator | 2b2b4d09f94c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) manager-flower-1
2026-03-23 01:15:20.986301 | orchestrator | c4bd19ebeb9a registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) manager-openstack-1
2026-03-23 01:15:20.986305 | orchestrator | c403e8efce8a registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 55 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-03-23 01:15:20.986309 | orchestrator | 02d887eaad74 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 36 minutes (healthy) manager-beat-1
2026-03-23 01:15:20.986313 | orchestrator | 1bdc0b5b12f9 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-03-23 01:15:21.082105 | orchestrator |
2026-03-23 01:15:21.082191 | orchestrator | ## Images @ testbed-manager
2026-03-23 01:15:21.082202 | orchestrator |
2026-03-23 01:15:21.082209 | orchestrator | + echo
2026-03-23 01:15:21.082234 | orchestrator | + echo '## Images @ testbed-manager'
2026-03-23 01:15:21.082241 | orchestrator | + echo
2026-03-23 01:15:21.082251 | orchestrator | + osism container testbed-manager images
2026-03-23 01:15:22.366129 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-23 01:15:22.366245 | orchestrator | registry.osism.tech/osism/osism-ansible latest f3bbfa4d4a44 About an hour ago 634MB
2026-03-23 01:15:22.366259 | orchestrator | registry.osism.tech/osism/osism latest 335f8c6cf630 About an hour ago 408MB
2026-03-23 01:15:22.366266 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest d66cd53403aa About an hour ago 1.24GB
2026-03-23 01:15:22.366273 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 89e984f64e75 About an hour ago 631MB
2026-03-23 01:15:22.366304 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 7eefc71e8495 About an hour ago 585MB
2026-03-23 01:15:22.366312 | orchestrator | registry.osism.tech/osism/osism-frontend latest 5ee7182b1e3f About an hour ago 212MB
2026-03-23 01:15:22.366318 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 348d335ab38c About an hour ago 357MB
2026-03-23 01:15:22.366324 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 eb769477da0c 21 hours ago 239MB
2026-03-23 01:15:22.366330 | orchestrator | registry.osism.tech/osism/cephclient reef ef86eafabe7d 21 hours ago 453MB
2026-03-23 01:15:22.366336 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 30cd99beb087 24 hours ago 589MB
2026-03-23 01:15:22.366343 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e2cbc56089fc 24 hours ago 275MB
2026-03-23 01:15:22.366349 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f448c3e5523e 24 hours ago 677MB
2026-03-23 01:15:22.366355 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9c387a10712c 24 hours ago 315MB
2026-03-23 01:15:22.366361 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 bab4eaf286c9 24 hours ago 413MB
2026-03-23 01:15:22.366387 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 d2e7482bb2cf 24 hours ago 317MB
2026-03-23 01:15:22.366395 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 b0c068cefe41 24 hours ago 849MB
2026-03-23 01:15:22.366401 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 04b8a53ed311 24 hours ago 367MB
2026-03-23 01:15:22.366407 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 7 weeks ago 41.4MB
2026-03-23 01:15:22.366413 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-03-23 01:15:22.366419 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-03-23 01:15:22.366425 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-03-23 01:15:22.366431 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 months ago 275MB
2026-03-23 01:15:22.366437 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-03-23 01:15:22.366444 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-03-23 01:15:22.449934 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-23 01:15:22.451119 | orchestrator | ++ semver latest 5.0.0
2026-03-23 01:15:22.492273 | orchestrator |
2026-03-23 01:15:22.492367 | orchestrator | ## Containers @ testbed-node-0
2026-03-23 01:15:22.492377 | orchestrator |
2026-03-23 01:15:22.492384 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-23 01:15:22.492391 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-23 01:15:22.492411 | orchestrator | + echo
2026-03-23 01:15:22.492418 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-03-23 01:15:22.492426 | orchestrator | + echo
2026-03-23 01:15:22.492435 | orchestrator | + osism container testbed-node-0 ps
2026-03-23 01:15:23.798397 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-23 01:15:23.798467 | orchestrator | 4f17fcb85b5c registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-03-23 01:15:23.798478 | orchestrator | 396d9db580a1 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-03-23 01:15:23.798487 | orchestrator | 0e3077e3b6fa registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-03-23 01:15:23.798495 | orchestrator | c600b6074894 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-03-23 01:15:23.798503 | orchestrator | 9990a789cc6e registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-03-23 01:15:23.798510 | orchestrator | d8ec1fc4b054 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2026-03-23 01:15:23.798517 | orchestrator | 921ad0271351 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2026-03-23 01:15:23.798535 | orchestrator | 1655abc24a47 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-03-23 01:15:23.798543 | orchestrator | e897fc2a99a0 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api
2026-03-23 01:15:23.798563 | orchestrator | 8c9b515514f6 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy
2026-03-23 01:15:23.798570 | orchestrator | 961fc1abc607 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor
2026-03-23 01:15:23.798577 | orchestrator | fc616f17697b registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2026-03-23 01:15:23.798585 | orchestrator | b92aed766b6c registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2026-03-23 01:15:23.798592 | orchestrator | fd62329c97de registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2026-03-23 01:15:23.798599 | orchestrator | 047fa1481495 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-03-23 01:15:23.798606 | orchestrator | f062adef6d22 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-03-23 01:15:23.798613 | orchestrator | 2927d6cd839a registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-03-23 01:15:23.798621 | orchestrator | 748cdca69334 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-03-23 01:15:23.798628 | orchestrator | d969d705c465 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api
2026-03-23 01:15:23.798636 | orchestrator | 1c94309f96b0 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) barbican_worker
2026-03-23 01:15:23.798643 | orchestrator | 32a64f11177d registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-03-23 01:15:23.798661 | orchestrator | 4774018f94ea registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-03-23 01:15:23.798669 | orchestrator | 6493f8e6428c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-03-23 01:15:23.798677 | orchestrator | d02c12861ac8 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup
2026-03-23 01:15:23.798683 | orchestrator | 61e8498ea17a registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume
2026-03-23 01:15:23.798693 | orchestrator | af0186455bd8 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api
2026-03-23 01:15:23.798701 | orchestrator | 68d54acd6907 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2026-03-23 01:15:23.798708 | orchestrator | 05b80b6ec4ac registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-03-23 01:15:23.798718 | orchestrator | 8b7956b46372 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-03-23 01:15:23.798730 | orchestrator | 2d93671270a5 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2026-03-23 01:15:23.798738 | orchestrator | 42d0b341ae19 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2026-03-23 01:15:23.798745 | orchestrator | 1e7d7578d798 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2026-03-23 01:15:23.798752 | orchestrator | 46b4ac92f442 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2026-03-23 01:15:23.798759 | orchestrator | c45adbc0b14e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2026-03-23 01:15:23.798767 | orchestrator | 3481a2656bfe registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2026-03-23 01:15:23.798774 | orchestrator | 929585a09aa3 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2026-03-23 01:15:23.798781 | orchestrator | b7fdbb79fefe registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh
2026-03-23 01:15:23.798788 | orchestrator | 35eb4397410d registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2026-03-23 01:15:23.798795 | orchestrator | 436761bdd81a registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb
2026-03-23 01:15:23.798802 | orchestrator | 5d0f665483b1 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2026-03-23 01:15:23.798810 | orchestrator | 3ed54364c256 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2026-03-23 01:15:23.798817 | orchestrator | 232f7e08ad4d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0
2026-03-23 01:15:23.798824 | orchestrator | dde8262f2e64 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2026-03-23 01:15:23.798831 | orchestrator | c6c267e4527d registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql
2026-03-23 01:15:23.798844 | orchestrator | b53b85d5f244 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy
2026-03-23 01:15:23.798851 | orchestrator | f5613d3365a5 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd
2026-03-23 01:15:23.798859 | orchestrator | 1479d84e40c1 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db
2026-03-23 01:15:23.798866 | orchestrator | 8627c5c8aa4e registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db
2026-03-23 01:15:23.798882 | orchestrator | 903466fcaa6b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 24 minutes ago Up 24 minutes ceph-mon-testbed-node-0
2026-03-23 01:15:23.798890 | orchestrator | e4eed6e81332 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2026-03-23 01:15:23.798897 | orchestrator | bc47387f72f4 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-03-23 01:15:23.798904 | orchestrator | 070f58e9bde2 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd
2026-03-23 01:15:23.798912 | orchestrator | b304536e9fba registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis_sentinel
2026-03-23 01:15:23.798919 | orchestrator | 4cb12a0060aa registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_db
2026-03-23 01:15:23.798930 | orchestrator | 00cf9d8e3ee0 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis
2026-03-23 01:15:23.798938 | orchestrator | b7788ad99ea7 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) memcached
2026-03-23 01:15:23.798945 | orchestrator | 47426add538c registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-03-23 01:15:23.798952 | orchestrator | f341853d7b16 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-03-23 01:15:23.798960 | orchestrator | 313e8c4c7dfe registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-03-23 01:15:23.887986 | orchestrator |
2026-03-23 01:15:23.888046 | orchestrator | ## Images @ testbed-node-0
2026-03-23 01:15:23.888056 | orchestrator |
2026-03-23 01:15:23.888063 | orchestrator | + echo
2026-03-23 01:15:23.888069 | orchestrator | + echo '## Images @ testbed-node-0'
2026-03-23 01:15:23.888077 | orchestrator | + echo
2026-03-23 01:15:23.888083 | orchestrator | + osism container testbed-node-0 images
2026-03-23 01:15:25.183554 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-23 01:15:25.183632 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 82093647655a 21 hours ago 1.35GB
2026-03-23 01:15:25.183649 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 30cd99beb087 24 hours ago 589MB
2026-03-23 01:15:25.183659 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e2cbc56089fc 24 hours ago 275MB
2026-03-23 01:15:25.183670 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 3811e7c7f992 24 hours ago 332MB
2026-03-23 01:15:25.183687 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 0be74f8c46ac 24 hours ago 426MB
2026-03-23 01:15:25.183697 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 b2316388d3b6 24 hours ago 1.04GB
2026-03-23 01:15:25.183707 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 4dba23057ed6 24 hours ago 286MB
2026-03-23 01:15:25.183718 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 abf4908e9e2a 24 hours ago 276MB
2026-03-23 01:15:25.183728 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 502db80e6751
24 hours ago 284MB 2026-03-23 01:15:25.183739 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 85bfb6202c31 24 hours ago 1.54GB 2026-03-23 01:15:25.183771 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 7774fb5e4654 24 hours ago 1.57GB 2026-03-23 01:15:25.183781 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f448c3e5523e 24 hours ago 677MB 2026-03-23 01:15:25.183791 | orchestrator | registry.osism.tech/kolla/redis 2024.2 b78cc21ac685 24 hours ago 282MB 2026-03-23 01:15:25.183800 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 b041eb128040 24 hours ago 282MB 2026-03-23 01:15:25.183810 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9c387a10712c 24 hours ago 315MB 2026-03-23 01:15:25.183820 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 da194cdbb5ca 24 hours ago 308MB 2026-03-23 01:15:25.183843 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 21911ee74c13 24 hours ago 301MB 2026-03-23 01:15:25.183854 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 04b8a53ed311 24 hours ago 367MB 2026-03-23 01:15:25.183863 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 ad090537adfa 24 hours ago 310MB 2026-03-23 01:15:25.183874 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a5eb04a3045f 24 hours ago 288MB 2026-03-23 01:15:25.183884 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 39f0b58a0d91 24 hours ago 288MB 2026-03-23 01:15:25.183894 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 d8ce6b095ef7 24 hours ago 1.16GB 2026-03-23 01:15:25.183903 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 4cb07600a579 24 hours ago 462MB 2026-03-23 01:15:25.183914 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 95a8a2abd379 24 hours ago 850MB 2026-03-23 01:15:25.183923 | orchestrator | 
registry.osism.tech/kolla/ovn-controller 2024.2 da0d5e2acdfa 24 hours ago 850MB 2026-03-23 01:15:25.183932 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 f8772916baaf 24 hours ago 850MB 2026-03-23 01:15:25.183942 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 27dfcc81f118 24 hours ago 850MB 2026-03-23 01:15:25.183951 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f267b8793a51 24 hours ago 1.08GB 2026-03-23 01:15:25.183961 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 1170a1983f89 24 hours ago 1.05GB 2026-03-23 01:15:25.183971 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 9ad06a5f50af 24 hours ago 1.05GB 2026-03-23 01:15:25.183980 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 e086558ec6a3 24 hours ago 1.42GB 2026-03-23 01:15:25.184001 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 61a402d969b9 24 hours ago 1.73GB 2026-03-23 01:15:25.184020 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 857a283c7a95 24 hours ago 1.41GB 2026-03-23 01:15:25.184031 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 d7eb464cab66 24 hours ago 1.41GB 2026-03-23 01:15:25.184041 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 e5acd41f0cf0 24 hours ago 985MB 2026-03-23 01:15:25.184050 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 a4fb09f2a3a8 24 hours ago 1.04GB 2026-03-23 01:15:25.184082 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c37fbc4ee6c9 24 hours ago 1.04GB 2026-03-23 01:15:25.184094 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 5b8d63dd9415 24 hours ago 1.06GB 2026-03-23 01:15:25.184104 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 cb6e726acb65 24 hours ago 1.04GB 2026-03-23 01:15:25.184113 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 cd8e3196ccfc 24 hours ago 1.06GB 2026-03-23 01:15:25.184133 | orchestrator | 
registry.osism.tech/kolla/glance-api 2024.2 04f236f13cef 24 hours ago 1.11GB 2026-03-23 01:15:25.184143 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 a75efb818fda 24 hours ago 999MB 2026-03-23 01:15:25.184153 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 43873e3b7304 24 hours ago 1.06GB 2026-03-23 01:15:25.184163 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 757cd738eb57 24 hours ago 985MB 2026-03-23 01:15:25.184172 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 9de18079dbde 24 hours ago 986MB 2026-03-23 01:15:25.184188 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 39296444e2d9 24 hours ago 998MB 2026-03-23 01:15:25.184199 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 4d52225d6d14 24 hours ago 998MB 2026-03-23 01:15:25.184275 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 74b369e64cbe 24 hours ago 994MB 2026-03-23 01:15:25.184290 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f9bdea9b8d98 24 hours ago 994MB 2026-03-23 01:15:25.184301 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 05817b702bcb 24 hours ago 994MB 2026-03-23 01:15:25.184311 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d9d41b3618ae 24 hours ago 993MB 2026-03-23 01:15:25.184321 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 338a6490b571 24 hours ago 1.14GB 2026-03-23 01:15:25.184331 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 c38da76b903f 24 hours ago 1.25GB 2026-03-23 01:15:25.184340 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 14546f610bff 24 hours ago 1.38GB 2026-03-23 01:15:25.184349 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 62a6fd97c66b 24 hours ago 1.22GB 2026-03-23 01:15:25.184360 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 534a15fb91dd 24 hours ago 1.22GB 2026-03-23 
01:15:25.184369 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 0f5c982c4528 24 hours ago 1.22GB 2026-03-23 01:15:25.184380 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 2fee874dadd9 24 hours ago 984MB 2026-03-23 01:15:25.184389 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 7fe754fbdadd 24 hours ago 984MB 2026-03-23 01:15:25.184399 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 b442196dfcef 24 hours ago 983MB 2026-03-23 01:15:25.184408 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 a8d91aa39750 24 hours ago 984MB 2026-03-23 01:15:25.184419 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 27119865b222 24 hours ago 1.17GB 2026-03-23 01:15:25.184428 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 088a62ec001e 24 hours ago 1GB 2026-03-23 01:15:25.184438 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 a4639d4c0002 24 hours ago 1e+03MB 2026-03-23 01:15:25.184447 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 bcab190eb256 24 hours ago 1GB 2026-03-23 01:15:25.293616 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-23 01:15:25.293987 | orchestrator | ++ semver latest 5.0.0 2026-03-23 01:15:25.340612 | orchestrator | 2026-03-23 01:15:25.340660 | orchestrator | ## Containers @ testbed-node-1 2026-03-23 01:15:25.340666 | orchestrator | 2026-03-23 01:15:25.340671 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-23 01:15:25.340675 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-23 01:15:25.340679 | orchestrator | + echo 2026-03-23 01:15:25.340683 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-23 01:15:25.340688 | orchestrator | + echo 2026-03-23 01:15:25.340692 | orchestrator | + osism container testbed-node-1 ps 2026-03-23 01:15:26.674304 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-23 01:15:26.674347 | orchestrator | 
e99d9e161ca1 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-23 01:15:26.674353 | orchestrator | 44d393671bca registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-23 01:15:26.674357 | orchestrator | 47ec0617777e registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-23 01:15:26.674360 | orchestrator | 8e2cefc90de1 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-23 01:15:26.674363 | orchestrator | f76366f8045e registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-03-23 01:15:26.674375 | orchestrator | 4c76d3b2f5da registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-23 01:15:26.674378 | orchestrator | 19601e007ba4 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2026-03-23 01:15:26.674381 | orchestrator | 8abcc62f6aa9 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2026-03-23 01:15:26.674386 | orchestrator | 98a790f36958 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api 2026-03-23 01:15:26.674389 | orchestrator | e61cd75d0d64 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-03-23 01:15:26.674392 | orchestrator | f20673e7af81 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor 2026-03-23 01:15:26.674396 | orchestrator | 49dcc4191dd2 
registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2026-03-23 01:15:26.674399 | orchestrator | 21f53b1c0b01 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2026-03-23 01:15:26.674402 | orchestrator | c57f7134f8e2 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2026-03-23 01:15:26.674405 | orchestrator | 75a44b4e6cf9 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-03-23 01:15:26.674408 | orchestrator | 73959a2d9709 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-03-23 01:15:26.674412 | orchestrator | ac051305704c registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-03-23 01:15:26.674415 | orchestrator | c8318165c5fc registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-03-23 01:15:26.674418 | orchestrator | cafecdc66965 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-03-23 01:15:26.674430 | orchestrator | 14d794167362 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_scheduler 2026-03-23 01:15:26.674433 | orchestrator | 1df71df06458 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) barbican_worker 2026-03-23 01:15:26.674443 | orchestrator | f4dfb520c124 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-03-23 
01:15:26.674446 | orchestrator | c2b105648783 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-03-23 01:15:26.674449 | orchestrator | 5b0e8f3618c1 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-23 01:15:26.674452 | orchestrator | a4b5abe84dc4 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-23 01:15:26.674455 | orchestrator | 3e27bb685dc8 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-23 01:15:26.674459 | orchestrator | af8fa6527f55 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-03-23 01:15:26.674465 | orchestrator | 12e21a6996a4 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-03-23 01:15:26.674468 | orchestrator | 3e0593e79139 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-23 01:15:26.674471 | orchestrator | 1b2a253bd979 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-23 01:15:26.674474 | orchestrator | 495b534a041a registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-03-23 01:15:26.674477 | orchestrator | b8e8b696a395 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-03-23 01:15:26.674480 | orchestrator | cdddd66a1413 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes 
ago Up 13 minutes prometheus_node_exporter 2026-03-23 01:15:26.674483 | orchestrator | 42c6aa71e8b9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2026-03-23 01:15:26.674486 | orchestrator | eb87322c4a81 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-03-23 01:15:26.674490 | orchestrator | 05299fb73acc registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-03-23 01:15:26.674493 | orchestrator | 3a29bfb72a1f registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon 2026-03-23 01:15:26.674496 | orchestrator | 046abb05cfbc registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh 2026-03-23 01:15:26.674499 | orchestrator | 87963bcd77d4 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) opensearch_dashboards 2026-03-23 01:15:26.674504 | orchestrator | fe8392290a2f registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch 2026-03-23 01:15:26.674507 | orchestrator | 8cb90dbec9fe registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2026-03-23 01:15:26.674510 | orchestrator | 5e377448808c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2026-03-23 01:15:26.674514 | orchestrator | 42d9354ec4ee registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-03-23 01:15:26.674517 | orchestrator | 29231bc31ff6 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2026-03-23 
01:15:26.674522 | orchestrator | 91f94360a15f registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy 2026-03-23 01:15:26.674526 | orchestrator | d5f3ee14b8c5 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-03-23 01:15:26.674529 | orchestrator | cb5be5686816 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-03-23 01:15:26.674532 | orchestrator | 308cba670e29 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db 2026-03-23 01:15:26.674535 | orchestrator | 6b38d13d71f1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-1 2026-03-23 01:15:26.674538 | orchestrator | a8c1b5d43793 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-03-23 01:15:26.674541 | orchestrator | 36e43c4b692c registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-03-23 01:15:26.674544 | orchestrator | 4ac0e2fa82ef registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-03-23 01:15:26.674549 | orchestrator | f664686a1fb1 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis_sentinel 2026-03-23 01:15:26.674552 | orchestrator | cb99039ad0f5 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_db 2026-03-23 01:15:26.674556 | orchestrator | 66b2b47eb55d registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis 2026-03-23 01:15:26.674559 | orchestrator | 1a71b4467971 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) memcached 2026-03-23 01:15:26.674562 | orchestrator | 9fd4464a4d05 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-03-23 01:15:26.674565 | orchestrator | 49a3196d942a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2026-03-23 01:15:26.674570 | orchestrator | 0fdf395a2956 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-03-23 01:15:26.812361 | orchestrator | 2026-03-23 01:15:26.812416 | orchestrator | ## Images @ testbed-node-1 2026-03-23 01:15:26.812425 | orchestrator | 2026-03-23 01:15:26.812431 | orchestrator | + echo 2026-03-23 01:15:26.812438 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-23 01:15:26.812445 | orchestrator | + echo 2026-03-23 01:15:26.812452 | orchestrator | + osism container testbed-node-1 images 2026-03-23 01:15:28.225326 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-23 01:15:28.225389 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 82093647655a 21 hours ago 1.35GB 2026-03-23 01:15:28.225396 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 30cd99beb087 24 hours ago 589MB 2026-03-23 01:15:28.225400 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e2cbc56089fc 24 hours ago 275MB 2026-03-23 01:15:28.225405 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 3811e7c7f992 24 hours ago 332MB 2026-03-23 01:15:28.225409 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 0be74f8c46ac 24 hours ago 426MB 2026-03-23 01:15:28.225413 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 b2316388d3b6 24 hours ago 1.04GB 2026-03-23 01:15:28.225418 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 4dba23057ed6 24 hours ago 286MB 2026-03-23 01:15:28.225422 | orchestrator | 
registry.osism.tech/kolla/memcached 2024.2 abf4908e9e2a 24 hours ago 276MB 2026-03-23 01:15:28.225426 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 502db80e6751 24 hours ago 284MB 2026-03-23 01:15:28.225430 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 85bfb6202c31 24 hours ago 1.54GB 2026-03-23 01:15:28.225435 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 7774fb5e4654 24 hours ago 1.57GB 2026-03-23 01:15:28.225439 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f448c3e5523e 24 hours ago 677MB 2026-03-23 01:15:28.225443 | orchestrator | registry.osism.tech/kolla/redis 2024.2 b78cc21ac685 24 hours ago 282MB 2026-03-23 01:15:28.225447 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 b041eb128040 24 hours ago 282MB 2026-03-23 01:15:28.225452 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9c387a10712c 24 hours ago 315MB 2026-03-23 01:15:28.225456 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 da194cdbb5ca 24 hours ago 308MB 2026-03-23 01:15:28.225460 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 21911ee74c13 24 hours ago 301MB 2026-03-23 01:15:28.225465 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 04b8a53ed311 24 hours ago 367MB 2026-03-23 01:15:28.225469 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 ad090537adfa 24 hours ago 310MB 2026-03-23 01:15:28.225473 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a5eb04a3045f 24 hours ago 288MB 2026-03-23 01:15:28.225477 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 39f0b58a0d91 24 hours ago 288MB 2026-03-23 01:15:28.225482 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 d8ce6b095ef7 24 hours ago 1.16GB 2026-03-23 01:15:28.225486 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 4cb07600a579 24 hours ago 462MB 
2026-03-23 01:15:28.225490 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 95a8a2abd379 24 hours ago 850MB 2026-03-23 01:15:28.225494 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 da0d5e2acdfa 24 hours ago 850MB 2026-03-23 01:15:28.225513 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 f8772916baaf 24 hours ago 850MB 2026-03-23 01:15:28.225517 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 27dfcc81f118 24 hours ago 850MB 2026-03-23 01:15:28.225522 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f267b8793a51 24 hours ago 1.08GB 2026-03-23 01:15:28.225526 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 1170a1983f89 24 hours ago 1.05GB 2026-03-23 01:15:28.225530 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 9ad06a5f50af 24 hours ago 1.05GB 2026-03-23 01:15:28.225535 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 e086558ec6a3 24 hours ago 1.42GB 2026-03-23 01:15:28.225539 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 61a402d969b9 24 hours ago 1.73GB 2026-03-23 01:15:28.225543 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 857a283c7a95 24 hours ago 1.41GB 2026-03-23 01:15:28.225556 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 d7eb464cab66 24 hours ago 1.41GB 2026-03-23 01:15:28.225560 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 e5acd41f0cf0 24 hours ago 985MB 2026-03-23 01:15:28.225564 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 a4fb09f2a3a8 24 hours ago 1.04GB 2026-03-23 01:15:28.225578 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c37fbc4ee6c9 24 hours ago 1.04GB 2026-03-23 01:15:28.225582 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 5b8d63dd9415 24 hours ago 1.06GB 2026-03-23 01:15:28.225587 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 cb6e726acb65 24 hours ago 1.04GB 
2026-03-23 01:15:28.225591 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 cd8e3196ccfc 24 hours ago 1.06GB 2026-03-23 01:15:28.225595 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 04f236f13cef 24 hours ago 1.11GB 2026-03-23 01:15:28.225599 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 39296444e2d9 24 hours ago 998MB 2026-03-23 01:15:28.225603 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 4d52225d6d14 24 hours ago 998MB 2026-03-23 01:15:28.225608 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 74b369e64cbe 24 hours ago 994MB 2026-03-23 01:15:28.225612 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f9bdea9b8d98 24 hours ago 994MB 2026-03-23 01:15:28.225616 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 05817b702bcb 24 hours ago 994MB 2026-03-23 01:15:28.225620 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d9d41b3618ae 24 hours ago 993MB 2026-03-23 01:15:28.225624 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 338a6490b571 24 hours ago 1.14GB 2026-03-23 01:15:28.225628 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 c38da76b903f 24 hours ago 1.25GB 2026-03-23 01:15:28.225633 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 14546f610bff 24 hours ago 1.38GB 2026-03-23 01:15:28.225637 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 62a6fd97c66b 24 hours ago 1.22GB 2026-03-23 01:15:28.225641 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 534a15fb91dd 24 hours ago 1.22GB 2026-03-23 01:15:28.225645 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 0f5c982c4528 24 hours ago 1.22GB 2026-03-23 01:15:28.225649 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 27119865b222 24 hours ago 1.17GB 2026-03-23 01:15:28.225654 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 088a62ec001e 
24 hours ago 1GB 2026-03-23 01:15:28.225661 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 a4639d4c0002 24 hours ago 1e+03MB 2026-03-23 01:15:28.225665 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 bcab190eb256 24 hours ago 1GB 2026-03-23 01:15:28.362756 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-23 01:15:28.363316 | orchestrator | ++ semver latest 5.0.0 2026-03-23 01:15:28.419090 | orchestrator | 2026-03-23 01:15:28.419145 | orchestrator | ## Containers @ testbed-node-2 2026-03-23 01:15:28.419154 | orchestrator | 2026-03-23 01:15:28.419162 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-23 01:15:28.419169 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-23 01:15:28.419176 | orchestrator | + echo 2026-03-23 01:15:28.419183 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-23 01:15:28.419191 | orchestrator | + echo 2026-03-23 01:15:28.419198 | orchestrator | + osism container testbed-node-2 ps 2026-03-23 01:15:29.867329 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-23 01:15:29.867462 | orchestrator | f45de2b25879 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-23 01:15:29.867478 | orchestrator | 6500541061a5 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-23 01:15:29.867483 | orchestrator | ea552068f2fe registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-23 01:15:29.867528 | orchestrator | 419e8afc07e1 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-23 01:15:29.867534 | orchestrator | 0607b6440d14 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes 
ago Up 4 minutes (healthy) octavia_api 2026-03-23 01:15:29.867539 | orchestrator | 7a0fc0b771b8 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-23 01:15:29.867544 | orchestrator | ed45f48cc946 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2026-03-23 01:15:29.867549 | orchestrator | be9e9a30e8a8 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2026-03-23 01:15:29.867554 | orchestrator | ddb8b576c572 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api 2026-03-23 01:15:29.867558 | orchestrator | a75d85a87bc9 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-03-23 01:15:29.867562 | orchestrator | 24200c7b1dfd registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor 2026-03-23 01:15:29.867566 | orchestrator | 9db166d8105f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2026-03-23 01:15:29.867570 | orchestrator | a770e095529e registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2026-03-23 01:15:29.867574 | orchestrator | 4633781a995f registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-03-23 01:15:29.867599 | orchestrator | f687202092b8 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-03-23 01:15:29.867625 | orchestrator | ad9ae3b9e858 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) 
designate_central 2026-03-23 01:15:29.867629 | orchestrator | 58b83841de69 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-03-23 01:15:29.867633 | orchestrator | 7b840a7a57e4 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-03-23 01:15:29.867637 | orchestrator | c4bfa0f3988f registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-03-23 01:15:29.867641 | orchestrator | 11197bf2f579 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_scheduler 2026-03-23 01:15:29.867644 | orchestrator | ebf9319f6f97 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-03-23 01:15:29.867663 | orchestrator | e6f1b3f18f3f registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-03-23 01:15:29.867667 | orchestrator | 1a3c69b38bba registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-03-23 01:15:29.867671 | orchestrator | e4857ba12106 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-23 01:15:29.867675 | orchestrator | 362daab958a2 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-23 01:15:29.867679 | orchestrator | fcceb664adcb registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-23 01:15:29.867682 | orchestrator | efc68ca1c39f registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 
13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-03-23 01:15:29.867688 | orchestrator | 743555f56401 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-03-23 01:15:29.867691 | orchestrator | b39c9f169a5b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-23 01:15:29.867695 | orchestrator | c561db9cfafe registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-23 01:15:29.867699 | orchestrator | 7015fd1b0419 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-03-23 01:15:29.867703 | orchestrator | b824cd43848b registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-03-23 01:15:29.867706 | orchestrator | 1047aea6350b registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2026-03-23 01:15:29.867710 | orchestrator | 95ee2bce8a0b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 14 minutes ago Up 14 minutes ceph-mgr-testbed-node-2 2026-03-23 01:15:29.867719 | orchestrator | 6a2ee70511f7 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-03-23 01:15:29.867722 | orchestrator | 7b7e3e966c54 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-03-23 01:15:29.867726 | orchestrator | 74a495e1dc8e registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon 2026-03-23 01:15:29.867731 | orchestrator | 0c7a58fb29eb registry.osism.tech/kolla/keystone-ssh:2024.2 
"dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh 2026-03-23 01:15:29.867737 | orchestrator | 9a3c5f76f446 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) opensearch_dashboards 2026-03-23 01:15:29.867742 | orchestrator | 8064916764d3 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2026-03-23 01:15:29.867748 | orchestrator | 9e3ce07478ab registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch 2026-03-23 01:15:29.867753 | orchestrator | 2eb189a11757 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2026-03-23 01:15:29.867763 | orchestrator | b83f22eca175 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-03-23 01:15:29.867769 | orchestrator | 46b678f46101 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2026-03-23 01:15:29.867782 | orchestrator | d07fb4914644 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy 2026-03-23 01:15:29.867789 | orchestrator | fbbae3d8f92e registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-03-23 01:15:29.867795 | orchestrator | 61067cf51a8d registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-03-23 01:15:29.867801 | orchestrator | f6bef310aac5 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db 2026-03-23 01:15:29.867807 | orchestrator | 23cf77fa1d81 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) rabbitmq 2026-03-23 01:15:29.867818 | 
orchestrator | cf3db6eb1c0a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-2 2026-03-23 01:15:29.867824 | orchestrator | 0b0bd65c31db registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-03-23 01:15:29.867829 | orchestrator | 9c06383b3bbf registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-03-23 01:15:29.867836 | orchestrator | b40feaa4c9d2 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_db 2026-03-23 01:15:29.867842 | orchestrator | dc6fd11afc86 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis_sentinel 2026-03-23 01:15:29.867854 | orchestrator | 077a5059867f registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis 2026-03-23 01:15:29.867860 | orchestrator | 14fcb8d528f4 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) memcached 2026-03-23 01:15:29.867866 | orchestrator | 32e43cdf6a38 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-03-23 01:15:29.867872 | orchestrator | 6ce887d1c83b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2026-03-23 01:15:29.867877 | orchestrator | b41162482a36 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-03-23 01:15:30.012436 | orchestrator | 2026-03-23 01:15:30.012632 | orchestrator | ## Images @ testbed-node-2 2026-03-23 01:15:30.012650 | orchestrator | 2026-03-23 01:15:30.012657 | orchestrator | + echo 2026-03-23 01:15:30.012663 | orchestrator | + echo '## Images @ testbed-node-2' 
2026-03-23 01:15:30.012671 | orchestrator | + echo 2026-03-23 01:15:30.012737 | orchestrator | + osism container testbed-node-2 images 2026-03-23 01:15:31.505818 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-23 01:15:31.505924 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 82093647655a 21 hours ago 1.35GB 2026-03-23 01:15:31.505936 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 30cd99beb087 24 hours ago 589MB 2026-03-23 01:15:31.505959 | orchestrator | registry.osism.tech/kolla/cron 2024.2 e2cbc56089fc 24 hours ago 275MB 2026-03-23 01:15:31.505966 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 3811e7c7f992 24 hours ago 332MB 2026-03-23 01:15:31.505973 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 0be74f8c46ac 24 hours ago 426MB 2026-03-23 01:15:31.505979 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 b2316388d3b6 24 hours ago 1.04GB 2026-03-23 01:15:31.505985 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 4dba23057ed6 24 hours ago 286MB 2026-03-23 01:15:31.505990 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 abf4908e9e2a 24 hours ago 276MB 2026-03-23 01:15:31.505996 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 502db80e6751 24 hours ago 284MB 2026-03-23 01:15:31.506003 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 85bfb6202c31 24 hours ago 1.54GB 2026-03-23 01:15:31.506008 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 7774fb5e4654 24 hours ago 1.57GB 2026-03-23 01:15:31.506072 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f448c3e5523e 24 hours ago 677MB 2026-03-23 01:15:31.506079 | orchestrator | registry.osism.tech/kolla/redis 2024.2 b78cc21ac685 24 hours ago 282MB 2026-03-23 01:15:31.506086 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 b041eb128040 24 hours ago 282MB 2026-03-23 01:15:31.506092 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 
9c387a10712c 24 hours ago 315MB 2026-03-23 01:15:31.506098 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 da194cdbb5ca 24 hours ago 308MB 2026-03-23 01:15:31.506104 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 21911ee74c13 24 hours ago 301MB 2026-03-23 01:15:31.506110 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 04b8a53ed311 24 hours ago 367MB 2026-03-23 01:15:31.506117 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a5eb04a3045f 24 hours ago 288MB 2026-03-23 01:15:31.506147 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 ad090537adfa 24 hours ago 310MB 2026-03-23 01:15:31.506154 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 39f0b58a0d91 24 hours ago 288MB 2026-03-23 01:15:31.506160 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 d8ce6b095ef7 24 hours ago 1.16GB 2026-03-23 01:15:31.506166 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 4cb07600a579 24 hours ago 462MB 2026-03-23 01:15:31.506172 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 95a8a2abd379 24 hours ago 850MB 2026-03-23 01:15:31.506180 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 da0d5e2acdfa 24 hours ago 850MB 2026-03-23 01:15:31.506186 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 f8772916baaf 24 hours ago 850MB 2026-03-23 01:15:31.506192 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 27dfcc81f118 24 hours ago 850MB 2026-03-23 01:15:31.506222 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f267b8793a51 24 hours ago 1.08GB 2026-03-23 01:15:31.506228 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 1170a1983f89 24 hours ago 1.05GB 2026-03-23 01:15:31.506234 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 9ad06a5f50af 24 hours ago 1.05GB 2026-03-23 01:15:31.506240 | orchestrator | 
registry.osism.tech/kolla/cinder-backup 2024.2 e086558ec6a3 24 hours ago 1.42GB 2026-03-23 01:15:31.506247 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 61a402d969b9 24 hours ago 1.73GB 2026-03-23 01:15:31.506252 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 857a283c7a95 24 hours ago 1.41GB 2026-03-23 01:15:31.506258 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 d7eb464cab66 24 hours ago 1.41GB 2026-03-23 01:15:31.506263 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 e5acd41f0cf0 24 hours ago 985MB 2026-03-23 01:15:31.506269 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 a4fb09f2a3a8 24 hours ago 1.04GB 2026-03-23 01:15:31.506302 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c37fbc4ee6c9 24 hours ago 1.04GB 2026-03-23 01:15:31.506309 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 5b8d63dd9415 24 hours ago 1.06GB 2026-03-23 01:15:31.506314 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 cb6e726acb65 24 hours ago 1.04GB 2026-03-23 01:15:31.506320 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 cd8e3196ccfc 24 hours ago 1.06GB 2026-03-23 01:15:31.506326 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 04f236f13cef 24 hours ago 1.11GB 2026-03-23 01:15:31.506332 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 39296444e2d9 24 hours ago 998MB 2026-03-23 01:15:31.506338 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 4d52225d6d14 24 hours ago 998MB 2026-03-23 01:15:31.506345 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 74b369e64cbe 24 hours ago 994MB 2026-03-23 01:15:31.506351 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f9bdea9b8d98 24 hours ago 994MB 2026-03-23 01:15:31.506358 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 05817b702bcb 24 hours ago 994MB 2026-03-23 01:15:31.506365 
| orchestrator | registry.osism.tech/kolla/designate-central 2024.2 d9d41b3618ae 24 hours ago 993MB 2026-03-23 01:15:31.506372 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 338a6490b571 24 hours ago 1.14GB 2026-03-23 01:15:31.506390 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 c38da76b903f 24 hours ago 1.25GB 2026-03-23 01:15:31.506406 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 14546f610bff 24 hours ago 1.38GB 2026-03-23 01:15:31.506413 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 62a6fd97c66b 24 hours ago 1.22GB 2026-03-23 01:15:31.506420 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 534a15fb91dd 24 hours ago 1.22GB 2026-03-23 01:15:31.506427 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 0f5c982c4528 24 hours ago 1.22GB 2026-03-23 01:15:31.506433 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 27119865b222 24 hours ago 1.17GB 2026-03-23 01:15:31.506440 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 088a62ec001e 24 hours ago 1GB 2026-03-23 01:15:31.506446 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 a4639d4c0002 24 hours ago 1e+03MB 2026-03-23 01:15:31.506453 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 bcab190eb256 24 hours ago 1GB 2026-03-23 01:15:31.657905 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-23 01:15:31.664347 | orchestrator | + set -e 2026-03-23 01:15:31.664415 | orchestrator | + source /opt/manager-vars.sh 2026-03-23 01:15:31.666760 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-23 01:15:31.666839 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-23 01:15:31.666850 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-23 01:15:31.666855 | orchestrator | ++ CEPH_VERSION=reef 2026-03-23 01:15:31.666859 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-23 01:15:31.666864 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-03-23 01:15:31.666868 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-23 01:15:31.666872 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-23 01:15:31.666876 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-23 01:15:31.666880 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-23 01:15:31.666884 | orchestrator | ++ export ARA=false 2026-03-23 01:15:31.666888 | orchestrator | ++ ARA=false 2026-03-23 01:15:31.666892 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-23 01:15:31.666896 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-23 01:15:31.666900 | orchestrator | ++ export TEMPEST=true 2026-03-23 01:15:31.666904 | orchestrator | ++ TEMPEST=true 2026-03-23 01:15:31.666907 | orchestrator | ++ export IS_ZUUL=true 2026-03-23 01:15:31.666911 | orchestrator | ++ IS_ZUUL=true 2026-03-23 01:15:31.666915 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169 2026-03-23 01:15:31.666918 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169 2026-03-23 01:15:31.666922 | orchestrator | ++ export EXTERNAL_API=false 2026-03-23 01:15:31.666926 | orchestrator | ++ EXTERNAL_API=false 2026-03-23 01:15:31.666929 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-23 01:15:31.666933 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-23 01:15:31.666937 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-23 01:15:31.666940 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-23 01:15:31.666944 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-23 01:15:31.666948 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-23 01:15:31.666951 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-23 01:15:31.666955 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-23 01:15:31.676498 | orchestrator | + set -e 2026-03-23 01:15:31.677063 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-23 01:15:31.677101 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-23 01:15:31.677111 | orchestrator | ++ INTERACTIVE=false 2026-03-23 01:15:31.677117 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-23 01:15:31.677123 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-23 01:15:31.677129 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-23 01:15:31.677527 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-23 01:15:31.682524 | orchestrator | 2026-03-23 01:15:31.682576 | orchestrator | # Ceph status 2026-03-23 01:15:31.682581 | orchestrator | 2026-03-23 01:15:31.682586 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-23 01:15:31.682591 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-23 01:15:31.682595 | orchestrator | + echo 2026-03-23 01:15:31.682599 | orchestrator | + echo '# Ceph status' 2026-03-23 01:15:31.682603 | orchestrator | + echo 2026-03-23 01:15:31.682607 | orchestrator | + ceph -s 2026-03-23 01:15:32.233722 | orchestrator | cluster: 2026-03-23 01:15:32.233834 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-23 01:15:32.233844 | orchestrator | health: HEALTH_OK 2026-03-23 01:15:32.233849 | orchestrator | 2026-03-23 01:15:32.233853 | orchestrator | services: 2026-03-23 01:15:32.233858 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 24m) 2026-03-23 01:15:32.233870 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-1, testbed-node-2 2026-03-23 01:15:32.233875 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-23 01:15:32.233880 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 22m) 2026-03-23 01:15:32.233884 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-23 01:15:32.233888 | orchestrator | 2026-03-23 01:15:32.233892 | orchestrator | data: 2026-03-23 01:15:32.233896 | orchestrator | volumes: 1/1 healthy 2026-03-23 01:15:32.233899 | orchestrator | pools: 
14 pools, 401 pgs 2026-03-23 01:15:32.233903 | orchestrator | objects: 556 objects, 2.2 GiB 2026-03-23 01:15:32.233907 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-23 01:15:32.233911 | orchestrator | pgs: 401 active+clean 2026-03-23 01:15:32.233915 | orchestrator | 2026-03-23 01:15:32.288899 | orchestrator | 2026-03-23 01:15:32.288969 | orchestrator | # Ceph versions 2026-03-23 01:15:32.288975 | orchestrator | 2026-03-23 01:15:32.288979 | orchestrator | + echo 2026-03-23 01:15:32.288983 | orchestrator | + echo '# Ceph versions' 2026-03-23 01:15:32.288988 | orchestrator | + echo 2026-03-23 01:15:32.288992 | orchestrator | + ceph versions 2026-03-23 01:15:32.893190 | orchestrator | { 2026-03-23 01:15:32.893336 | orchestrator | "mon": { 2026-03-23 01:15:32.893350 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-23 01:15:32.893358 | orchestrator | }, 2026-03-23 01:15:32.893364 | orchestrator | "mgr": { 2026-03-23 01:15:32.893387 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-23 01:15:32.893394 | orchestrator | }, 2026-03-23 01:15:32.893400 | orchestrator | "osd": { 2026-03-23 01:15:32.893406 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-03-23 01:15:32.893412 | orchestrator | }, 2026-03-23 01:15:32.893419 | orchestrator | "mds": { 2026-03-23 01:15:32.893425 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-23 01:15:32.893432 | orchestrator | }, 2026-03-23 01:15:32.893438 | orchestrator | "rgw": { 2026-03-23 01:15:32.893445 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-23 01:15:32.893451 | orchestrator | }, 2026-03-23 01:15:32.893458 | orchestrator | "overall": { 2026-03-23 01:15:32.893464 | orchestrator | "ceph version 18.2.8 
(efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-03-23 01:15:32.893470 | orchestrator | } 2026-03-23 01:15:32.893476 | orchestrator | } 2026-03-23 01:15:32.949026 | orchestrator | 2026-03-23 01:15:32.949103 | orchestrator | # Ceph OSD tree 2026-03-23 01:15:32.949115 | orchestrator | 2026-03-23 01:15:32.949121 | orchestrator | + echo 2026-03-23 01:15:32.949128 | orchestrator | + echo '# Ceph OSD tree' 2026-03-23 01:15:32.949135 | orchestrator | + echo 2026-03-23 01:15:32.949140 | orchestrator | + ceph osd df tree 2026-03-23 01:15:33.466110 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-23 01:15:33.466252 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 421 MiB 113 GiB 5.91 1.00 - root default 2026-03-23 01:15:33.466263 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2026-03-23 01:15:33.466268 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.37 1.08 192 up osd.2 2026-03-23 01:15:33.466272 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.44 0.92 200 up osd.3 2026-03-23 01:15:33.466277 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2026-03-23 01:15:33.466281 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.89 1.17 192 up osd.0 2026-03-23 01:15:33.466285 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1008 MiB 939 MiB 1 KiB 70 MiB 19 GiB 4.93 0.83 196 up osd.5 2026-03-23 01:15:33.466311 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-03-23 01:15:33.466315 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.20 1.22 204 up osd.1 2026-03-23 01:15:33.466319 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 948 MiB 875 MiB 1 KiB 74 MiB 19 GiB 4.63 0.78 186 up osd.4 2026-03-23 
01:15:33.466323 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 421 MiB 113 GiB 5.91 2026-03-23 01:15:33.466329 | orchestrator | MIN/MAX VAR: 0.78/1.22 STDDEV: 0.97 2026-03-23 01:15:33.510808 | orchestrator | 2026-03-23 01:15:33.510879 | orchestrator | # Ceph monitor status 2026-03-23 01:15:33.510886 | orchestrator | 2026-03-23 01:15:33.510890 | orchestrator | + echo 2026-03-23 01:15:33.510894 | orchestrator | + echo '# Ceph monitor status' 2026-03-23 01:15:33.510899 | orchestrator | + echo 2026-03-23 01:15:33.510903 | orchestrator | + ceph mon stat 2026-03-23 01:15:34.088980 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-23 01:15:34.130676 | orchestrator | 2026-03-23 01:15:34.130763 | orchestrator | # Ceph quorum status 2026-03-23 01:15:34.130774 | orchestrator | 2026-03-23 01:15:34.130781 | orchestrator | + echo 2026-03-23 01:15:34.130788 | orchestrator | + echo '# Ceph quorum status' 2026-03-23 01:15:34.130795 | orchestrator | + echo 2026-03-23 01:15:34.131190 | orchestrator | + ceph quorum_status 2026-03-23 01:15:34.131294 | orchestrator | + jq 2026-03-23 01:15:34.751098 | orchestrator | { 2026-03-23 01:15:34.751187 | orchestrator | "election_epoch": 8, 2026-03-23 01:15:34.751267 | orchestrator | "quorum": [ 2026-03-23 01:15:34.751275 | orchestrator | 0, 2026-03-23 01:15:34.751280 | orchestrator | 1, 2026-03-23 01:15:34.751286 | orchestrator | 2 2026-03-23 01:15:34.751292 | orchestrator | ], 2026-03-23 01:15:34.751297 | orchestrator | "quorum_names": [ 2026-03-23 01:15:34.751304 | orchestrator | "testbed-node-0", 2026-03-23 01:15:34.751309 | orchestrator | "testbed-node-1", 2026-03-23 01:15:34.751315 | orchestrator | 
"testbed-node-2" 2026-03-23 01:15:34.751321 | orchestrator | ], 2026-03-23 01:15:34.751327 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-23 01:15:34.751335 | orchestrator | "quorum_age": 1501, 2026-03-23 01:15:34.751341 | orchestrator | "features": { 2026-03-23 01:15:34.751347 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-23 01:15:34.751353 | orchestrator | "quorum_mon": [ 2026-03-23 01:15:34.751359 | orchestrator | "kraken", 2026-03-23 01:15:34.751365 | orchestrator | "luminous", 2026-03-23 01:15:34.751371 | orchestrator | "mimic", 2026-03-23 01:15:34.751377 | orchestrator | "osdmap-prune", 2026-03-23 01:15:34.751384 | orchestrator | "nautilus", 2026-03-23 01:15:34.751434 | orchestrator | "octopus", 2026-03-23 01:15:34.751440 | orchestrator | "pacific", 2026-03-23 01:15:34.751446 | orchestrator | "elector-pinging", 2026-03-23 01:15:34.751452 | orchestrator | "quincy", 2026-03-23 01:15:34.751457 | orchestrator | "reef" 2026-03-23 01:15:34.751463 | orchestrator | ] 2026-03-23 01:15:34.751468 | orchestrator | }, 2026-03-23 01:15:34.751473 | orchestrator | "monmap": { 2026-03-23 01:15:34.751479 | orchestrator | "epoch": 1, 2026-03-23 01:15:34.751485 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-23 01:15:34.751491 | orchestrator | "modified": "2026-03-23T00:50:14.081460Z", 2026-03-23 01:15:34.751498 | orchestrator | "created": "2026-03-23T00:50:14.081460Z", 2026-03-23 01:15:34.751503 | orchestrator | "min_mon_release": 18, 2026-03-23 01:15:34.751509 | orchestrator | "min_mon_release_name": "reef", 2026-03-23 01:15:34.751515 | orchestrator | "election_strategy": 1, 2026-03-23 01:15:34.751520 | orchestrator | "disallowed_leaders": "", 2026-03-23 01:15:34.751526 | orchestrator | "stretch_mode": false, 2026-03-23 01:15:34.751532 | orchestrator | "tiebreaker_mon": "", 2026-03-23 01:15:34.751538 | orchestrator | "removed_ranks": "", 2026-03-23 01:15:34.751544 | orchestrator | "features": { 2026-03-23 
01:15:34.751549 | orchestrator | "persistent": [ 2026-03-23 01:15:34.751555 | orchestrator | "kraken", 2026-03-23 01:15:34.751561 | orchestrator | "luminous", 2026-03-23 01:15:34.751567 | orchestrator | "mimic", 2026-03-23 01:15:34.751572 | orchestrator | "osdmap-prune", 2026-03-23 01:15:34.751602 | orchestrator | "nautilus", 2026-03-23 01:15:34.751608 | orchestrator | "octopus", 2026-03-23 01:15:34.751613 | orchestrator | "pacific", 2026-03-23 01:15:34.751619 | orchestrator | "elector-pinging", 2026-03-23 01:15:34.751624 | orchestrator | "quincy", 2026-03-23 01:15:34.751630 | orchestrator | "reef" 2026-03-23 01:15:34.751635 | orchestrator | ], 2026-03-23 01:15:34.751641 | orchestrator | "optional": [] 2026-03-23 01:15:34.751647 | orchestrator | }, 2026-03-23 01:15:34.751653 | orchestrator | "mons": [ 2026-03-23 01:15:34.751659 | orchestrator | { 2026-03-23 01:15:34.751665 | orchestrator | "rank": 0, 2026-03-23 01:15:34.751671 | orchestrator | "name": "testbed-node-0", 2026-03-23 01:15:34.751678 | orchestrator | "public_addrs": { 2026-03-23 01:15:34.751684 | orchestrator | "addrvec": [ 2026-03-23 01:15:34.751690 | orchestrator | { 2026-03-23 01:15:34.751696 | orchestrator | "type": "v2", 2026-03-23 01:15:34.751702 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-23 01:15:34.751708 | orchestrator | "nonce": 0 2026-03-23 01:15:34.751715 | orchestrator | }, 2026-03-23 01:15:34.751722 | orchestrator | { 2026-03-23 01:15:34.751728 | orchestrator | "type": "v1", 2026-03-23 01:15:34.751734 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-23 01:15:34.751741 | orchestrator | "nonce": 0 2026-03-23 01:15:34.751747 | orchestrator | } 2026-03-23 01:15:34.751754 | orchestrator | ] 2026-03-23 01:15:34.751759 | orchestrator | }, 2026-03-23 01:15:34.751765 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-23 01:15:34.751771 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-23 01:15:34.751777 | orchestrator | "priority": 0, 2026-03-23 01:15:34.751783 
| orchestrator | "weight": 0, 2026-03-23 01:15:34.751789 | orchestrator | "crush_location": "{}" 2026-03-23 01:15:34.751795 | orchestrator | }, 2026-03-23 01:15:34.751801 | orchestrator | { 2026-03-23 01:15:34.751806 | orchestrator | "rank": 1, 2026-03-23 01:15:34.751812 | orchestrator | "name": "testbed-node-1", 2026-03-23 01:15:34.751819 | orchestrator | "public_addrs": { 2026-03-23 01:15:34.751825 | orchestrator | "addrvec": [ 2026-03-23 01:15:34.751831 | orchestrator | { 2026-03-23 01:15:34.751837 | orchestrator | "type": "v2", 2026-03-23 01:15:34.751843 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-23 01:15:34.751849 | orchestrator | "nonce": 0 2026-03-23 01:15:34.751855 | orchestrator | }, 2026-03-23 01:15:34.751861 | orchestrator | { 2026-03-23 01:15:34.751868 | orchestrator | "type": "v1", 2026-03-23 01:15:34.751873 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-23 01:15:34.751879 | orchestrator | "nonce": 0 2026-03-23 01:15:34.751884 | orchestrator | } 2026-03-23 01:15:34.751890 | orchestrator | ] 2026-03-23 01:15:34.751896 | orchestrator | }, 2026-03-23 01:15:34.751916 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-23 01:15:34.751923 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-23 01:15:34.751928 | orchestrator | "priority": 0, 2026-03-23 01:15:34.751934 | orchestrator | "weight": 0, 2026-03-23 01:15:34.751940 | orchestrator | "crush_location": "{}" 2026-03-23 01:15:34.751945 | orchestrator | }, 2026-03-23 01:15:34.751951 | orchestrator | { 2026-03-23 01:15:34.751956 | orchestrator | "rank": 2, 2026-03-23 01:15:34.751962 | orchestrator | "name": "testbed-node-2", 2026-03-23 01:15:34.751968 | orchestrator | "public_addrs": { 2026-03-23 01:15:34.751974 | orchestrator | "addrvec": [ 2026-03-23 01:15:34.751980 | orchestrator | { 2026-03-23 01:15:34.751986 | orchestrator | "type": "v2", 2026-03-23 01:15:34.751992 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-23 01:15:34.751998 | orchestrator | "nonce": 0 
2026-03-23 01:15:34.752004 | orchestrator | }, 2026-03-23 01:15:34.752009 | orchestrator | { 2026-03-23 01:15:34.752015 | orchestrator | "type": "v1", 2026-03-23 01:15:34.752021 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-23 01:15:34.752027 | orchestrator | "nonce": 0 2026-03-23 01:15:34.752033 | orchestrator | } 2026-03-23 01:15:34.752039 | orchestrator | ] 2026-03-23 01:15:34.752044 | orchestrator | }, 2026-03-23 01:15:34.752051 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-23 01:15:34.752056 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-23 01:15:34.752062 | orchestrator | "priority": 0, 2026-03-23 01:15:34.752068 | orchestrator | "weight": 0, 2026-03-23 01:15:34.752074 | orchestrator | "crush_location": "{}" 2026-03-23 01:15:34.752085 | orchestrator | } 2026-03-23 01:15:34.752092 | orchestrator | ] 2026-03-23 01:15:34.752098 | orchestrator | } 2026-03-23 01:15:34.752104 | orchestrator | } 2026-03-23 01:15:34.752234 | orchestrator | 2026-03-23 01:15:34.752244 | orchestrator | # Ceph free space status 2026-03-23 01:15:34.752249 | orchestrator | 2026-03-23 01:15:34.752255 | orchestrator | + echo 2026-03-23 01:15:34.752261 | orchestrator | + echo '# Ceph free space status' 2026-03-23 01:15:34.752267 | orchestrator | + echo 2026-03-23 01:15:34.752273 | orchestrator | + ceph df 2026-03-23 01:15:35.342397 | orchestrator | --- RAW STORAGE --- 2026-03-23 01:15:35.342511 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-23 01:15:35.342529 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2026-03-23 01:15:35.342534 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2026-03-23 01:15:35.342538 | orchestrator | 2026-03-23 01:15:35.342543 | orchestrator | --- POOLS --- 2026-03-23 01:15:35.342548 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-23 01:15:35.342553 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-23 01:15:35.342558 | orchestrator | cephfs_data 2 32 0 B 0 0 
B 0 35 GiB 2026-03-23 01:15:35.343349 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-23 01:15:35.343404 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-23 01:15:35.343413 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-23 01:15:35.343420 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-23 01:15:35.343427 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-23 01:15:35.343435 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-23 01:15:35.343441 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-03-23 01:15:35.343449 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-23 01:15:35.343456 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-23 01:15:35.343463 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2026-03-23 01:15:35.343469 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-23 01:15:35.343475 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-23 01:15:35.385297 | orchestrator | ++ semver latest 5.0.0 2026-03-23 01:15:35.432929 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-23 01:15:35.433024 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-23 01:15:35.433036 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-03-23 01:15:35.433043 | orchestrator | + osism apply facts 2026-03-23 01:15:46.759373 | orchestrator | 2026-03-23 01:15:46 | INFO  | Prepare task for execution of facts. 2026-03-23 01:15:46.837091 | orchestrator | 2026-03-23 01:15:46 | INFO  | Task 89ddc191-f21f-44ad-bd4d-f17eecc68911 (facts) was prepared for execution. 2026-03-23 01:15:46.837201 | orchestrator | 2026-03-23 01:15:46 | INFO  | It takes a moment until task 89ddc191-f21f-44ad-bd4d-f17eecc68911 (facts) has been started and output is visible here. 
2026-03-23 01:16:00.026513 | orchestrator | 2026-03-23 01:16:00.026611 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-23 01:16:00.026620 | orchestrator | 2026-03-23 01:16:00.026625 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-23 01:16:00.026629 | orchestrator | Monday 23 March 2026 01:15:50 +0000 (0:00:00.357) 0:00:00.357 ********** 2026-03-23 01:16:00.026633 | orchestrator | ok: [testbed-manager] 2026-03-23 01:16:00.026638 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:00.026643 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:16:00.026647 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:16:00.026651 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:16:00.027376 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:16:00.027418 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:16:00.027427 | orchestrator | 2026-03-23 01:16:00.027437 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-23 01:16:00.027476 | orchestrator | Monday 23 March 2026 01:15:51 +0000 (0:00:01.510) 0:00:01.867 ********** 2026-03-23 01:16:00.027498 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:16:00.027506 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:00.027511 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:16:00.027517 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:16:00.027523 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:16:00.027529 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:16:00.027535 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:16:00.027540 | orchestrator | 2026-03-23 01:16:00.027546 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-23 01:16:00.027552 | orchestrator | 2026-03-23 01:16:00.027558 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-23 01:16:00.027564 | orchestrator | Monday 23 March 2026 01:15:52 +0000 (0:00:01.316) 0:00:03.183 ********** 2026-03-23 01:16:00.027570 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:16:00.027576 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:16:00.027582 | orchestrator | ok: [testbed-manager] 2026-03-23 01:16:00.027587 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:00.027593 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:16:00.027599 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:16:00.027604 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:16:00.027610 | orchestrator | 2026-03-23 01:16:00.027615 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-23 01:16:00.027620 | orchestrator | 2026-03-23 01:16:00.027626 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-23 01:16:00.027631 | orchestrator | Monday 23 March 2026 01:15:59 +0000 (0:00:06.033) 0:00:09.217 ********** 2026-03-23 01:16:00.027636 | orchestrator | skipping: [testbed-manager] 2026-03-23 01:16:00.027643 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:00.027648 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:16:00.027654 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:16:00.027659 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:16:00.027665 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:16:00.027671 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:16:00.027676 | orchestrator | 2026-03-23 01:16:00.027682 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:16:00.027688 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 01:16:00.027695 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-23 01:16:00.027701 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 01:16:00.027706 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 01:16:00.027711 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 01:16:00.027717 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 01:16:00.027722 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 01:16:00.027728 | orchestrator | 2026-03-23 01:16:00.027733 | orchestrator | 2026-03-23 01:16:00.027739 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:16:00.027745 | orchestrator | Monday 23 March 2026 01:15:59 +0000 (0:00:00.757) 0:00:09.975 ********** 2026-03-23 01:16:00.027750 | orchestrator | =============================================================================== 2026-03-23 01:16:00.027765 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.03s 2026-03-23 01:16:00.027771 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.51s 2026-03-23 01:16:00.027777 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s 2026-03-23 01:16:00.027783 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.76s 2026-03-23 01:16:00.226293 | orchestrator | + osism validate ceph-mons 2026-03-23 01:16:29.906885 | orchestrator | 2026-03-23 01:16:29.906940 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-23 01:16:29.906947 | orchestrator | 2026-03-23 01:16:29.906951 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-03-23 01:16:29.906956 | orchestrator | Monday 23 March 2026 01:16:15 +0000 (0:00:00.507) 0:00:00.507 ********** 2026-03-23 01:16:29.906960 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-23 01:16:29.906964 | orchestrator | 2026-03-23 01:16:29.906968 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-23 01:16:29.906971 | orchestrator | Monday 23 March 2026 01:16:16 +0000 (0:00:00.915) 0:00:01.423 ********** 2026-03-23 01:16:29.906983 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-23 01:16:29.906993 | orchestrator | 2026-03-23 01:16:29.907000 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-23 01:16:29.907007 | orchestrator | Monday 23 March 2026 01:16:16 +0000 (0:00:00.635) 0:00:02.058 ********** 2026-03-23 01:16:29.907014 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907021 | orchestrator | 2026-03-23 01:16:29.907026 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-23 01:16:29.907030 | orchestrator | Monday 23 March 2026 01:16:16 +0000 (0:00:00.118) 0:00:02.177 ********** 2026-03-23 01:16:29.907033 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907037 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:16:29.907041 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:16:29.907045 | orchestrator | 2026-03-23 01:16:29.907049 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-23 01:16:29.907053 | orchestrator | Monday 23 March 2026 01:16:17 +0000 (0:00:00.270) 0:00:02.447 ********** 2026-03-23 01:16:29.907058 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:16:29.907064 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:16:29.907070 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907077 | 
orchestrator | 2026-03-23 01:16:29.907083 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-23 01:16:29.907089 | orchestrator | Monday 23 March 2026 01:16:18 +0000 (0:00:01.521) 0:00:03.968 ********** 2026-03-23 01:16:29.907096 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907103 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:16:29.907109 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:16:29.907115 | orchestrator | 2026-03-23 01:16:29.907119 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-23 01:16:29.907123 | orchestrator | Monday 23 March 2026 01:16:18 +0000 (0:00:00.287) 0:00:04.256 ********** 2026-03-23 01:16:29.907127 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907133 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:16:29.907138 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:16:29.907143 | orchestrator | 2026-03-23 01:16:29.907149 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-23 01:16:29.907154 | orchestrator | Monday 23 March 2026 01:16:19 +0000 (0:00:00.290) 0:00:04.546 ********** 2026-03-23 01:16:29.907161 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907171 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:16:29.907177 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:16:29.907182 | orchestrator | 2026-03-23 01:16:29.907188 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-23 01:16:29.907195 | orchestrator | Monday 23 March 2026 01:16:19 +0000 (0:00:00.266) 0:00:04.813 ********** 2026-03-23 01:16:29.907288 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907298 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:16:29.907303 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:16:29.907307 | orchestrator | 2026-03-23 
01:16:29.907312 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-03-23 01:16:29.907318 | orchestrator | Monday 23 March 2026 01:16:19 +0000 (0:00:00.351) 0:00:05.165 ********** 2026-03-23 01:16:29.907322 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907326 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:16:29.907330 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:16:29.907334 | orchestrator | 2026-03-23 01:16:29.907346 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-23 01:16:29.907352 | orchestrator | Monday 23 March 2026 01:16:20 +0000 (0:00:00.283) 0:00:05.448 ********** 2026-03-23 01:16:29.907359 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907365 | orchestrator | 2026-03-23 01:16:29.907371 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-23 01:16:29.907377 | orchestrator | Monday 23 March 2026 01:16:20 +0000 (0:00:00.233) 0:00:05.682 ********** 2026-03-23 01:16:29.907384 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907390 | orchestrator | 2026-03-23 01:16:29.907397 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-23 01:16:29.907403 | orchestrator | Monday 23 March 2026 01:16:20 +0000 (0:00:00.246) 0:00:05.928 ********** 2026-03-23 01:16:29.907410 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907416 | orchestrator | 2026-03-23 01:16:29.907422 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-23 01:16:29.907429 | orchestrator | Monday 23 March 2026 01:16:20 +0000 (0:00:00.219) 0:00:06.148 ********** 2026-03-23 01:16:29.907436 | orchestrator | 2026-03-23 01:16:29.907442 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-23 01:16:29.907449 | orchestrator | 
Monday 23 March 2026 01:16:20 +0000 (0:00:00.064) 0:00:06.212 ********** 2026-03-23 01:16:29.907454 | orchestrator | 2026-03-23 01:16:29.907458 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-23 01:16:29.907462 | orchestrator | Monday 23 March 2026 01:16:20 +0000 (0:00:00.064) 0:00:06.277 ********** 2026-03-23 01:16:29.907466 | orchestrator | 2026-03-23 01:16:29.907470 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-23 01:16:29.907474 | orchestrator | Monday 23 March 2026 01:16:21 +0000 (0:00:00.160) 0:00:06.437 ********** 2026-03-23 01:16:29.907478 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907483 | orchestrator | 2026-03-23 01:16:29.907487 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-23 01:16:29.907492 | orchestrator | Monday 23 March 2026 01:16:21 +0000 (0:00:00.223) 0:00:06.660 ********** 2026-03-23 01:16:29.907496 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907500 | orchestrator | 2026-03-23 01:16:29.907513 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-03-23 01:16:29.907518 | orchestrator | Monday 23 March 2026 01:16:21 +0000 (0:00:00.233) 0:00:06.893 ********** 2026-03-23 01:16:29.907522 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907526 | orchestrator | 2026-03-23 01:16:29.907531 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-03-23 01:16:29.907535 | orchestrator | Monday 23 March 2026 01:16:21 +0000 (0:00:00.108) 0:00:07.002 ********** 2026-03-23 01:16:29.907539 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:16:29.907543 | orchestrator | 2026-03-23 01:16:29.907548 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-03-23 01:16:29.907552 | orchestrator | Monday 
23 March 2026 01:16:23 +0000 (0:00:01.501) 0:00:08.503 ********** 2026-03-23 01:16:29.907557 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907561 | orchestrator | 2026-03-23 01:16:29.907565 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-23 01:16:29.907570 | orchestrator | Monday 23 March 2026 01:16:23 +0000 (0:00:00.313) 0:00:08.817 ********** 2026-03-23 01:16:29.907580 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907584 | orchestrator | 2026-03-23 01:16:29.907588 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-23 01:16:29.907592 | orchestrator | Monday 23 March 2026 01:16:23 +0000 (0:00:00.121) 0:00:08.939 ********** 2026-03-23 01:16:29.907597 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907601 | orchestrator | 2026-03-23 01:16:29.907605 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-23 01:16:29.907612 | orchestrator | Monday 23 March 2026 01:16:23 +0000 (0:00:00.324) 0:00:09.263 ********** 2026-03-23 01:16:29.907617 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907621 | orchestrator | 2026-03-23 01:16:29.907626 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-23 01:16:29.907630 | orchestrator | Monday 23 March 2026 01:16:24 +0000 (0:00:00.322) 0:00:09.586 ********** 2026-03-23 01:16:29.907634 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907638 | orchestrator | 2026-03-23 01:16:29.907643 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-23 01:16:29.907647 | orchestrator | Monday 23 March 2026 01:16:24 +0000 (0:00:00.105) 0:00:09.691 ********** 2026-03-23 01:16:29.907651 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907656 | orchestrator | 2026-03-23 01:16:29.907660 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-03-23 01:16:29.907664 | orchestrator | Monday 23 March 2026 01:16:24 +0000 (0:00:00.137) 0:00:09.829 ********** 2026-03-23 01:16:29.907669 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907674 | orchestrator | 2026-03-23 01:16:29.907681 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-23 01:16:29.907685 | orchestrator | Monday 23 March 2026 01:16:24 +0000 (0:00:00.256) 0:00:10.086 ********** 2026-03-23 01:16:29.907690 | orchestrator | changed: [testbed-node-0] 2026-03-23 01:16:29.907694 | orchestrator | 2026-03-23 01:16:29.907698 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-23 01:16:29.907703 | orchestrator | Monday 23 March 2026 01:16:25 +0000 (0:00:01.233) 0:00:11.319 ********** 2026-03-23 01:16:29.907707 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907711 | orchestrator | 2026-03-23 01:16:29.907716 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-23 01:16:29.907720 | orchestrator | Monday 23 March 2026 01:16:26 +0000 (0:00:00.283) 0:00:11.602 ********** 2026-03-23 01:16:29.907724 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907729 | orchestrator | 2026-03-23 01:16:29.907733 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-23 01:16:29.907737 | orchestrator | Monday 23 March 2026 01:16:26 +0000 (0:00:00.155) 0:00:11.758 ********** 2026-03-23 01:16:29.907742 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:16:29.907746 | orchestrator | 2026-03-23 01:16:29.907751 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-23 01:16:29.907757 | orchestrator | Monday 23 March 2026 01:16:26 +0000 (0:00:00.140) 0:00:11.898 ********** 2026-03-23 01:16:29.907767 | orchestrator | 
skipping: [testbed-node-0] 2026-03-23 01:16:29.907774 | orchestrator | 2026-03-23 01:16:29.907780 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-23 01:16:29.907786 | orchestrator | Monday 23 March 2026 01:16:26 +0000 (0:00:00.147) 0:00:12.046 ********** 2026-03-23 01:16:29.907793 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907800 | orchestrator | 2026-03-23 01:16:29.907807 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-23 01:16:29.907813 | orchestrator | Monday 23 March 2026 01:16:26 +0000 (0:00:00.151) 0:00:12.198 ********** 2026-03-23 01:16:29.907818 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-23 01:16:29.907822 | orchestrator | 2026-03-23 01:16:29.907827 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-23 01:16:29.907831 | orchestrator | Monday 23 March 2026 01:16:27 +0000 (0:00:00.253) 0:00:12.451 ********** 2026-03-23 01:16:29.907839 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:16:29.907845 | orchestrator | 2026-03-23 01:16:29.907850 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-23 01:16:29.907854 | orchestrator | Monday 23 March 2026 01:16:27 +0000 (0:00:00.241) 0:00:12.693 ********** 2026-03-23 01:16:29.907860 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-23 01:16:29.907866 | orchestrator | 2026-03-23 01:16:29.907871 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-23 01:16:29.907875 | orchestrator | Monday 23 March 2026 01:16:29 +0000 (0:00:01.716) 0:00:14.410 ********** 2026-03-23 01:16:29.907880 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-23 01:16:29.907884 | orchestrator | 2026-03-23 01:16:29.907888 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-03-23 01:16:29.907893 | orchestrator | Monday 23 March 2026 01:16:29 +0000 (0:00:00.248) 0:00:14.658 ********** 2026-03-23 01:16:29.907897 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-23 01:16:29.907901 | orchestrator | 2026-03-23 01:16:29.907909 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-23 01:16:32.118507 | orchestrator | Monday 23 March 2026 01:16:29 +0000 (0:00:00.585) 0:00:15.243 ********** 2026-03-23 01:16:32.118563 | orchestrator | 2026-03-23 01:16:32.118571 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-23 01:16:32.118578 | orchestrator | Monday 23 March 2026 01:16:29 +0000 (0:00:00.068) 0:00:15.312 ********** 2026-03-23 01:16:32.118585 | orchestrator | 2026-03-23 01:16:32.118591 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-23 01:16:32.118597 | orchestrator | Monday 23 March 2026 01:16:30 +0000 (0:00:00.094) 0:00:15.406 ********** 2026-03-23 01:16:32.118604 | orchestrator | 2026-03-23 01:16:32.118610 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-23 01:16:32.118616 | orchestrator | Monday 23 March 2026 01:16:30 +0000 (0:00:00.074) 0:00:15.480 ********** 2026-03-23 01:16:32.118622 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-23 01:16:32.118629 | orchestrator | 2026-03-23 01:16:32.118635 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-23 01:16:32.118641 | orchestrator | Monday 23 March 2026 01:16:31 +0000 (0:00:01.279) 0:00:16.760 ********** 2026-03-23 01:16:32.118648 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-23 01:16:32.118654 | orchestrator |  "msg": [ 2026-03-23 
01:16:32.118661 | orchestrator |  "Validator run completed.", 2026-03-23 01:16:32.118668 | orchestrator |  "You can find the report file here:", 2026-03-23 01:16:32.118674 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-23T01:16:15+00:00-report.json", 2026-03-23 01:16:32.118681 | orchestrator |  "on the following host:", 2026-03-23 01:16:32.118688 | orchestrator |  "testbed-manager" 2026-03-23 01:16:32.118695 | orchestrator |  ] 2026-03-23 01:16:32.118699 | orchestrator | } 2026-03-23 01:16:32.118704 | orchestrator | 2026-03-23 01:16:32.118710 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:16:32.118717 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-23 01:16:32.118725 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 01:16:32.118732 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-23 01:16:32.118739 | orchestrator | 2026-03-23 01:16:32.118746 | orchestrator | 2026-03-23 01:16:32.118752 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:16:32.118758 | orchestrator | Monday 23 March 2026 01:16:31 +0000 (0:00:00.404) 0:00:17.165 ********** 2026-03-23 01:16:32.118774 | orchestrator | =============================================================================== 2026-03-23 01:16:32.118779 | orchestrator | Aggregate test results step one ----------------------------------------- 1.72s 2026-03-23 01:16:32.118782 | orchestrator | Get container info ------------------------------------------------------ 1.52s 2026-03-23 01:16:32.118786 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.50s 2026-03-23 01:16:32.118790 | orchestrator | Write report file 
------------------------------------------------------- 1.28s 2026-03-23 01:16:32.118794 | orchestrator | Gather status data ------------------------------------------------------ 1.23s 2026-03-23 01:16:32.118797 | orchestrator | Get timestamp for report file ------------------------------------------- 0.92s 2026-03-23 01:16:32.118801 | orchestrator | Create report output directory ------------------------------------------ 0.64s 2026-03-23 01:16:32.118805 | orchestrator | Aggregate test results step three --------------------------------------- 0.59s 2026-03-23 01:16:32.118808 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-03-23 01:16:32.118812 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.35s 2026-03-23 01:16:32.118816 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s 2026-03-23 01:16:32.118819 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2026-03-23 01:16:32.118823 | orchestrator | Set quorum test data ---------------------------------------------------- 0.31s 2026-03-23 01:16:32.118827 | orchestrator | Set test result to passed if container is existing ---------------------- 0.29s 2026-03-23 01:16:32.118830 | orchestrator | Flush handlers ---------------------------------------------------------- 0.29s 2026-03-23 01:16:32.118834 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-03-23 01:16:32.118838 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.28s 2026-03-23 01:16:32.118841 | orchestrator | Set health test data ---------------------------------------------------- 0.28s 2026-03-23 01:16:32.118845 | orchestrator | Prepare test data for container existance test -------------------------- 0.27s 2026-03-23 01:16:32.118849 | orchestrator | Prepare test data 
------------------------------------------------------- 0.27s 2026-03-23 01:16:32.296471 | orchestrator | + osism validate ceph-mgrs 2026-03-23 01:17:00.950275 | orchestrator | 2026-03-23 01:17:00.950343 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-23 01:17:00.950354 | orchestrator | 2026-03-23 01:17:00.950361 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-23 01:17:00.950369 | orchestrator | Monday 23 March 2026 01:16:47 +0000 (0:00:00.510) 0:00:00.510 ********** 2026-03-23 01:17:00.950376 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-23 01:17:00.950383 | orchestrator | 2026-03-23 01:17:00.950390 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-23 01:17:00.950462 | orchestrator | Monday 23 March 2026 01:16:48 +0000 (0:00:00.958) 0:00:01.469 ********** 2026-03-23 01:17:00.950475 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-23 01:17:00.950482 | orchestrator | 2026-03-23 01:17:00.950489 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-23 01:17:00.950497 | orchestrator | Monday 23 March 2026 01:16:48 +0000 (0:00:00.678) 0:00:02.148 ********** 2026-03-23 01:17:00.950509 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:17:00.950526 | orchestrator | 2026-03-23 01:17:00.950538 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-23 01:17:00.950562 | orchestrator | Monday 23 March 2026 01:16:48 +0000 (0:00:00.120) 0:00:02.268 ********** 2026-03-23 01:17:00.950574 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:17:00.950585 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:17:00.950595 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:17:00.950607 | orchestrator | 2026-03-23 01:17:00.950619 | orchestrator | TASK [Get 
container info] ****************************************************** 2026-03-23 01:17:00.950652 | orchestrator | Monday 23 March 2026 01:16:49 +0000 (0:00:00.276) 0:00:02.544 ********** 2026-03-23 01:17:00.950664 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:17:00.950676 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:17:00.950686 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:17:00.950696 | orchestrator | 2026-03-23 01:17:00.950708 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-23 01:17:00.950718 | orchestrator | Monday 23 March 2026 01:16:50 +0000 (0:00:01.353) 0:00:03.898 ********** 2026-03-23 01:17:00.950730 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:17:00.950742 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:17:00.950758 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:17:00.950770 | orchestrator | 2026-03-23 01:17:00.950782 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-23 01:17:00.950794 | orchestrator | Monday 23 March 2026 01:16:50 +0000 (0:00:00.287) 0:00:04.186 ********** 2026-03-23 01:17:00.950806 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:17:00.950818 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:17:00.950830 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:17:00.950841 | orchestrator | 2026-03-23 01:17:00.950852 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-23 01:17:00.950865 | orchestrator | Monday 23 March 2026 01:16:51 +0000 (0:00:00.323) 0:00:04.510 ********** 2026-03-23 01:17:00.950876 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:17:00.950887 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:17:00.950899 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:17:00.950910 | orchestrator | 2026-03-23 01:17:00.950922 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-03-23 01:17:00.950934 | orchestrator | Monday 23 March 2026 01:16:51 +0000 (0:00:00.289) 0:00:04.799 ********** 2026-03-23 01:17:00.950946 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:17:00.950959 | orchestrator | skipping: [testbed-node-1] 2026-03-23 01:17:00.950971 | orchestrator | skipping: [testbed-node-2] 2026-03-23 01:17:00.950983 | orchestrator | 2026-03-23 01:17:00.950994 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-23 01:17:00.951006 | orchestrator | Monday 23 March 2026 01:16:51 +0000 (0:00:00.445) 0:00:05.245 ********** 2026-03-23 01:17:00.951018 | orchestrator | ok: [testbed-node-0] 2026-03-23 01:17:00.951029 | orchestrator | ok: [testbed-node-1] 2026-03-23 01:17:00.951041 | orchestrator | ok: [testbed-node-2] 2026-03-23 01:17:00.951052 | orchestrator | 2026-03-23 01:17:00.951063 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-23 01:17:00.951075 | orchestrator | Monday 23 March 2026 01:16:52 +0000 (0:00:00.297) 0:00:05.542 ********** 2026-03-23 01:17:00.951087 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:17:00.951099 | orchestrator | 2026-03-23 01:17:00.951111 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-23 01:17:00.951120 | orchestrator | Monday 23 March 2026 01:16:52 +0000 (0:00:00.230) 0:00:05.773 ********** 2026-03-23 01:17:00.951129 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:17:00.951137 | orchestrator | 2026-03-23 01:17:00.951144 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-23 01:17:00.951152 | orchestrator | Monday 23 March 2026 01:16:52 +0000 (0:00:00.246) 0:00:06.019 ********** 2026-03-23 01:17:00.951160 | orchestrator | skipping: [testbed-node-0] 2026-03-23 01:17:00.951168 | orchestrator | 2026-03-23 01:17:00.951176 | orchestrator | TASK 
[Flush handlers] **********************************************************
2026-03-23 01:17:00.951184 | orchestrator | Monday 23 March 2026 01:16:52 +0000 (0:00:00.257) 0:00:06.277 **********
2026-03-23 01:17:00.951192 | orchestrator |
2026-03-23 01:17:00.951200 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-23 01:17:00.951207 | orchestrator | Monday 23 March 2026 01:16:53 +0000 (0:00:00.079) 0:00:06.357 **********
2026-03-23 01:17:00.951215 | orchestrator |
2026-03-23 01:17:00.951223 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-23 01:17:00.951238 | orchestrator | Monday 23 March 2026 01:16:53 +0000 (0:00:00.081) 0:00:06.439 **********
2026-03-23 01:17:00.951246 | orchestrator |
2026-03-23 01:17:00.951253 | orchestrator | TASK [Print report file information] *******************************************
2026-03-23 01:17:00.951261 | orchestrator | Monday 23 March 2026 01:16:53 +0000 (0:00:00.224) 0:00:06.663 **********
2026-03-23 01:17:00.951269 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:17:00.951276 | orchestrator |
2026-03-23 01:17:00.951284 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-23 01:17:00.951292 | orchestrator | Monday 23 March 2026 01:16:53 +0000 (0:00:00.262) 0:00:06.926 **********
2026-03-23 01:17:00.951300 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:17:00.951307 | orchestrator |
2026-03-23 01:17:00.951330 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-03-23 01:17:00.951338 | orchestrator | Monday 23 March 2026 01:16:53 +0000 (0:00:00.241) 0:00:07.167 **********
2026-03-23 01:17:00.951346 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:17:00.951354 | orchestrator |
2026-03-23 01:17:00.951361 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-03-23 01:17:00.951369 | orchestrator | Monday 23 March 2026 01:16:54 +0000 (0:00:00.150) 0:00:07.318 **********
2026-03-23 01:17:00.951377 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:17:00.951385 | orchestrator |
2026-03-23 01:17:00.951412 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-03-23 01:17:00.951420 | orchestrator | Monday 23 March 2026 01:16:55 +0000 (0:00:01.694) 0:00:09.013 **********
2026-03-23 01:17:00.951428 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:17:00.951436 | orchestrator |
2026-03-23 01:17:00.951444 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-03-23 01:17:00.951452 | orchestrator | Monday 23 March 2026 01:16:55 +0000 (0:00:00.239) 0:00:09.252 **********
2026-03-23 01:17:00.951460 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:17:00.951468 | orchestrator |
2026-03-23 01:17:00.951475 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-03-23 01:17:00.951483 | orchestrator | Monday 23 March 2026 01:16:56 +0000 (0:00:00.139) 0:00:09.540 **********
2026-03-23 01:17:00.951491 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:17:00.951499 | orchestrator |
2026-03-23 01:17:00.951507 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-03-23 01:17:00.951514 | orchestrator | Monday 23 March 2026 01:16:56 +0000 (0:00:00.144) 0:00:09.679 **********
2026-03-23 01:17:00.951522 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:17:00.951530 | orchestrator |
2026-03-23 01:17:00.951538 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-23 01:17:00.951546 | orchestrator | Monday 23 March 2026 01:16:56 +0000 (0:00:00.241) 0:00:09.824 **********
2026-03-23 01:17:00.951553 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-23 01:17:00.951561 | orchestrator |
2026-03-23 01:17:00.951569 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-23 01:17:00.951581 | orchestrator | Monday 23 March 2026 01:16:56 +0000 (0:00:00.241) 0:00:10.066 **********
2026-03-23 01:17:00.951589 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:17:00.951596 | orchestrator |
2026-03-23 01:17:00.951604 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-23 01:17:00.951612 | orchestrator | Monday 23 March 2026 01:16:57 +0000 (0:00:00.241) 0:00:10.307 **********
2026-03-23 01:17:00.951620 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-23 01:17:00.951627 | orchestrator |
2026-03-23 01:17:00.951635 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-23 01:17:00.951642 | orchestrator | Monday 23 March 2026 01:16:58 +0000 (0:00:01.482) 0:00:11.789 **********
2026-03-23 01:17:00.951650 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-23 01:17:00.951658 | orchestrator |
2026-03-23 01:17:00.951666 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-23 01:17:00.951678 | orchestrator | Monday 23 March 2026 01:16:58 +0000 (0:00:00.260) 0:00:12.049 **********
2026-03-23 01:17:00.951686 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-23 01:17:00.951693 | orchestrator |
2026-03-23 01:17:00.951701 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-23 01:17:00.951709 | orchestrator | Monday 23 March 2026 01:16:59 +0000 (0:00:00.278) 0:00:12.328 **********
2026-03-23 01:17:00.951716 | orchestrator |
2026-03-23 01:17:00.951724 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-23 01:17:00.951731 | orchestrator | Monday 23 March 2026 01:16:59 +0000 (0:00:00.069) 0:00:12.398 **********
2026-03-23 01:17:00.951739 | orchestrator |
2026-03-23 01:17:00.951747 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-23 01:17:00.951754 | orchestrator | Monday 23 March 2026 01:16:59 +0000 (0:00:00.099) 0:00:12.497 **********
2026-03-23 01:17:00.951762 | orchestrator |
2026-03-23 01:17:00.951770 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-23 01:17:00.951777 | orchestrator | Monday 23 March 2026 01:16:59 +0000 (0:00:00.077) 0:00:12.574 **********
2026-03-23 01:17:00.951785 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-23 01:17:00.951792 | orchestrator |
2026-03-23 01:17:00.951800 | orchestrator | TASK [Print report file information] *******************************************
2026-03-23 01:17:00.951807 | orchestrator | Monday 23 March 2026 01:17:00 +0000 (0:00:01.244) 0:00:13.819 **********
2026-03-23 01:17:00.951815 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-23 01:17:00.951823 | orchestrator |     "msg": [
2026-03-23 01:17:00.951831 | orchestrator |         "Validator run completed.",
2026-03-23 01:17:00.951839 | orchestrator |         "You can find the report file here:",
2026-03-23 01:17:00.951847 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-03-23T01:16:48+00:00-report.json",
2026-03-23 01:17:00.951856 | orchestrator |         "on the following host:",
2026-03-23 01:17:00.951864 | orchestrator |         "testbed-manager"
2026-03-23 01:17:00.951872 | orchestrator |     ]
2026-03-23 01:17:00.951880 | orchestrator | }
2026-03-23 01:17:00.951888 | orchestrator |
2026-03-23 01:17:00.951896 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 01:17:00.951904 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-23 01:17:00.951912 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 01:17:00.951929 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 01:17:01.261784 | orchestrator |
2026-03-23 01:17:01.261846 | orchestrator |
2026-03-23 01:17:01.261854 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 01:17:01.261861 | orchestrator | Monday 23 March 2026 01:17:00 +0000 (0:00:00.406) 0:00:14.225 **********
2026-03-23 01:17:01.261866 | orchestrator | ===============================================================================
2026-03-23 01:17:01.261872 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.69s
2026-03-23 01:17:01.261877 | orchestrator | Aggregate test results step one ----------------------------------------- 1.48s
2026-03-23 01:17:01.261882 | orchestrator | Get container info ------------------------------------------------------ 1.35s
2026-03-23 01:17:01.261887 | orchestrator | Write report file ------------------------------------------------------- 1.24s
2026-03-23 01:17:01.261892 | orchestrator | Get timestamp for report file ------------------------------------------- 0.96s
2026-03-23 01:17:01.261897 | orchestrator | Create report output directory ------------------------------------------ 0.68s
2026-03-23 01:17:01.261902 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.45s
2026-03-23 01:17:01.261921 | orchestrator | Print report file information ------------------------------------------- 0.41s
2026-03-23 01:17:01.261927 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s
2026-03-23 01:17:01.261935 | orchestrator | Set test result to passed if container is existing ---------------------- 0.32s
2026-03-23 01:17:01.261945 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s
2026-03-23 01:17:01.261958 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s
2026-03-23 01:17:01.261966 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2026-03-23 01:17:01.261975 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.29s
2026-03-23 01:17:01.261984 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2026-03-23 01:17:01.261993 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s
2026-03-23 01:17:01.262001 | orchestrator | Print report file information ------------------------------------------- 0.26s
2026-03-23 01:17:01.262010 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2026-03-23 01:17:01.262059 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2026-03-23 01:17:01.262068 | orchestrator | Flush handlers ---------------------------------------------------------- 0.25s
2026-03-23 01:17:01.428524 | orchestrator | + osism validate ceph-osds
2026-03-23 01:17:20.411668 | orchestrator |
2026-03-23 01:17:20.411725 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-03-23 01:17:20.411732 | orchestrator |
2026-03-23 01:17:20.411737 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-23 01:17:20.411741 | orchestrator | Monday 23 March 2026 01:17:16 +0000 (0:00:00.505) 0:00:00.505 **********
2026-03-23 01:17:20.411745 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-23 01:17:20.411750 | orchestrator |
2026-03-23 01:17:20.411753 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-23 01:17:20.411757 | orchestrator | Monday 23 March 2026 01:17:17 +0000 (0:00:01.028) 0:00:01.534 **********
2026-03-23 01:17:20.411761 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-23 01:17:20.411765 | orchestrator |
2026-03-23 01:17:20.411769 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-23 01:17:20.411773 | orchestrator | Monday 23 March 2026 01:17:17 +0000 (0:00:00.242) 0:00:01.777 **********
2026-03-23 01:17:20.411776 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-23 01:17:20.411780 | orchestrator |
2026-03-23 01:17:20.411784 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-23 01:17:20.411788 | orchestrator | Monday 23 March 2026 01:17:18 +0000 (0:00:00.710) 0:00:02.488 **********
2026-03-23 01:17:20.411791 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:20.411796 | orchestrator |
2026-03-23 01:17:20.411799 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-23 01:17:20.411803 | orchestrator | Monday 23 March 2026 01:17:18 +0000 (0:00:00.133) 0:00:02.622 **********
2026-03-23 01:17:20.411807 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:20.411811 | orchestrator |
2026-03-23 01:17:20.411815 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-23 01:17:20.411818 | orchestrator | Monday 23 March 2026 01:17:18 +0000 (0:00:00.162) 0:00:02.784 **********
2026-03-23 01:17:20.411822 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:20.411826 | orchestrator | skipping: [testbed-node-4]
2026-03-23 01:17:20.411830 | orchestrator | skipping: [testbed-node-5]
2026-03-23 01:17:20.411833 | orchestrator |
2026-03-23 01:17:20.411837 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-23 01:17:20.411841 | orchestrator | Monday 23 March 2026 01:17:19 +0000 (0:00:00.432) 0:00:03.216 **********
2026-03-23 01:17:20.411845 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:20.411848 | orchestrator |
2026-03-23 01:17:20.411864 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-23 01:17:20.411868 | orchestrator | Monday 23 March 2026 01:17:19 +0000 (0:00:00.156) 0:00:03.373 **********
2026-03-23 01:17:20.411871 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:20.411875 | orchestrator | ok: [testbed-node-4]
2026-03-23 01:17:20.411879 | orchestrator | ok: [testbed-node-5]
2026-03-23 01:17:20.411883 | orchestrator |
2026-03-23 01:17:20.411886 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-03-23 01:17:20.411890 | orchestrator | Monday 23 March 2026 01:17:19 +0000 (0:00:00.321) 0:00:03.694 **********
2026-03-23 01:17:20.411894 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:20.411897 | orchestrator |
2026-03-23 01:17:20.411909 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-23 01:17:20.411913 | orchestrator | Monday 23 March 2026 01:17:19 +0000 (0:00:00.282) 0:00:04.049 **********
2026-03-23 01:17:20.411917 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:20.411921 | orchestrator | ok: [testbed-node-4]
2026-03-23 01:17:20.411925 | orchestrator | ok: [testbed-node-5]
2026-03-23 01:17:20.411928 | orchestrator |
2026-03-23 01:17:20.411932 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-03-23 01:17:20.411936 | orchestrator | Monday 23 March 2026 01:17:20 +0000 (0:00:00.282) 0:00:04.331 **********
2026-03-23 01:17:20.411941 | orchestrator | skipping: [testbed-node-3] => (item={'id': '10e7fa8e4b61f8ac1251d86c0a02a174f3ed0f91e6b768ea9ab7b05409238934', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-03-23 01:17:20.411946 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'de4f3bcfb917d1b4c8bd06049fbd67bddf99699f11da13b583c3bd9e685f0aaf', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-23 01:17:20.411951 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd7c05f8de2a564b5805b11d2c764bfb45f88fa76f1556a35c71de76b6236eb16', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-23 01:17:20.411956 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aeb4592f80808cfa28941c90e6be972e0f62db342a858f2db1f4def73a8e6158', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-23 01:17:20.411967 | orchestrator | skipping: [testbed-node-3] => (item={'id': '94be3e0fd0d284c53eb9c2d3e3fde7082b25a2eaac5c1fbf74b02221aca2eddd', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2026-03-23 01:17:20.411983 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8575cd0336dbe1d6829b44a14e616101f4b0ae5ac83ff5446faa4c640b726b9e', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2026-03-23 01:17:20.411989 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3ca605d1c0fea0720f2b5259b422d6073d605861a181fa1c81eaa5bb6b790032', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-23 01:17:20.411993 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1e5d5043a394f79e3b32e4b6609752221be92dd24726d3617402233710be21d8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 19 minutes'})
2026-03-23 01:17:20.411997 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cefcf03abb388c89d5aebc5e02e2d11e94b2407ed8c5385818a98327bc54ba36', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 20 minutes'})
2026-03-23 01:17:20.412004 | orchestrator | skipping: [testbed-node-3] => (item={'id': '52f65970af500ab3890de591c64f34e147c3e83cfd4cfc22797ead5f3cd62378', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})
2026-03-23 01:17:20.412008 | orchestrator | ok: [testbed-node-3] => (item={'id': 'af5c557e7ecaf01d01c1e820cd2d69828577c2940f0c01035a819bad6d2d5050', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-23 01:17:20.412012 | orchestrator | ok: [testbed-node-3] => (item={'id': '184ad8f5d0026d0744c81c6bfc455ec8db91ca15e87b6097de43a61e66989f71', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-23 01:17:20.412016 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f9581b1ec9e72873c94b3a4c6b4b11dfb4d8867da9b8dfcb5d7fd5544044b463', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})
2026-03-23 01:17:20.412020 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4f5ed8bfa78cf932eaa296fd17b3cf3d60463a8c6f847cb8acf80265b91f82b8', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})
2026-03-23 01:17:20.412023 | orchestrator | skipping: [testbed-node-3] => (item={'id': '49e0150675b9f416bbc8dcd987580a4535574020bbe130e5c8ed19a555b99abe', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-03-23 01:17:20.412027 | orchestrator | skipping: [testbed-node-3] => (item={'id': '411fd4fcdd9237ee7bae4c74f4e9db9031916046bae064702f058d3b6ed3e51a', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 27 minutes'})
2026-03-23 01:17:20.412031 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bb0ebec375600534faeafae9dda8f8e7d86dab7cae2b37833bdcce683f73cb93', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 27 minutes'})
2026-03-23 01:17:20.412035 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3fa8e5c9ebf826cb1f769b619a51e02c5de5eacdf1c83dfdd9a59c3e95e19d93', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})
2026-03-23 01:17:20.412039 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5446a6db5166af08f9debc439c04fa507d3a8660677bac53cd23f779669619c1', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-03-23 01:17:20.412043 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b84021c089f8e30c01982dd2dc8dde51c1e7fd70008b0cc746cc52fe88ce55fe', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-23 01:17:20.412049 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd3d645f91a20a1a8e9ef1ed9943d040f4c6ce06a2bd80e44cb8b02871ea8c30', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-23 01:17:20.412056 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e92f86ae7c46e8960d48d1a38312d3590fdbf25764b9929b434888fdddb379cb', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-23 01:17:20.532061 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f13ad54bc86b8f9737488695ee2227a321b355355cd3085960e425936e0731c0', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2026-03-23 01:17:20.532106 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a1ce917b394b8deccf0f483314c31c8c4af874a6c629a4421fda37da1f2db04', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2026-03-23 01:17:20.532124 | orchestrator | skipping: [testbed-node-4] => (item={'id': '761b4b92bbef001d808e67deada3e3d8b7bd37f34fdf891245540d76fe1844e8', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-23 01:17:20.532129 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ad63400f75fc18a061b00caeb40745aca81b8abf0047a23cab54642652726a33', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 19 minutes'})
2026-03-23 01:17:20.532133 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c05dce6be7df141a0947d7f902687c23cd2e65f1cf0edd3a77f7a17fe58dbd20', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 20 minutes'})
2026-03-23 01:17:20.532137 | orchestrator | skipping: [testbed-node-4] => (item={'id': '17d6d47aeb5aa53c884a44bd37b215ad95a54a9cbe0dacca0f862c0bab3fb56d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})
2026-03-23 01:17:20.532142 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e84b9f21c76e8d275baa23facc7f1c5f0c6d1dd89f223f22a372b05ce8a70f3e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-23 01:17:20.532146 | orchestrator | ok: [testbed-node-4] => (item={'id': '2001d2a0e484735e2c22029983640a5b179e0ccdf838919cd1768c916d9d36a2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-23 01:17:20.532150 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5be8a9fa4133c9363c6985d57181944ecd78b3a11c99d476912f426cf6a052c8', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})
2026-03-23 01:17:20.532154 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ae51a90650022118d7b0895a751c4e519a9d1e40c4c1703f4adcf939fd942ac1', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})
2026-03-23 01:17:20.532160 | orchestrator | skipping: [testbed-node-4] => (item={'id': '055004a68fecd6fe68a49ec77a7741dcd33eda7071dd1784a317f9daf8567c5f', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-03-23 01:17:20.532167 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7fc23f3829fc327ebb7da93c93740ed41ce6a27c91669ffc38c31b7214538307', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 27 minutes'})
2026-03-23 01:17:20.532173 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8cfe572072407c1400e5cc1ed8acef270b1374700ceb0f15876444194c390785', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 27 minutes'})
2026-03-23 01:17:20.532180 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f0f16a9de0a1aaf97ed59fc44886508c0d701903e70e749f15f18ebbc4cdd308', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})
2026-03-23 01:17:20.532186 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ad642ef2efe9d55c5643327cc99bfa87f7465b8a6e246d4e36a07cab385b7f48', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2026-03-23 01:17:20.532202 | orchestrator | skipping: [testbed-node-5] => (item={'id': '236d88ac7a4ede94b8cf4cdf3594dc6ef98ec7eb9f3d6f0bef77e65fc6a64bdd', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-23 01:17:20.532213 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8ec18a504e38a57fa2889b625a99b199107e94308142ad5beb0ed6d232de33ef', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-03-23 01:17:20.532219 | orchestrator | skipping: [testbed-node-5] => (item={'id': '616b6a3dc697a5e0abeaaead2acac987cf9451947b174667b6d6876ae0b150f9', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-03-23 01:17:20.532226 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f2bfad448afb723883db3c2dc9616c8b437f2d88573a90ebd0cf6f9c61a7198c', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2026-03-23 01:17:20.532232 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5e13862adbc223a13ce9cda6559840c481c4668ed3677a07c2d6e3d74cee9776', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2026-03-23 01:17:20.532238 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f124af985653543e0d0abd49eabefa5639ca04c9fd118198aae006490351afd8', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-03-23 01:17:20.532244 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f696ff670d267459a31c55cd9f09c0e6c4abff2f88a06dca826c8700928b675b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 19 minutes'})
2026-03-23 01:17:20.532250 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3cb2a1e324aad12a50b3be3ce3f6bdc4abf1208e8498b0790052a53b6417e2b7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 20 minutes'})
2026-03-23 01:17:20.532257 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b7a057c7e5ff3c46d0de8a09665402fa61d36cf48fb6b4892f44d1e907110377', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})
2026-03-23 01:17:20.532274 | orchestrator | ok: [testbed-node-5] => (item={'id': '6155576036b6b4417fe9a5eeff5cf795dcdb28d8272139cd0d8ce9a403168ab4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-23 01:17:20.532280 | orchestrator | ok: [testbed-node-5] => (item={'id': '579fbc233a2519d495a2ed6c7f94baad927626de3d1f19d9150d507213e68273', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 22 minutes'})
2026-03-23 01:17:20.532284 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e4fba904b9b61820293a6c81ec12c5920a90deb47c46709effba3a6657b682ac', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})
2026-03-23 01:17:20.532288 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd7d6456667b5db262fb6780ffe4f0bed2a113e79099918e3a1f8ff2cafceb203', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})
2026-03-23 01:17:20.532292 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8f3a3626374351cc9cf5ce6b38a629771d046d931883b1f61886c0b136463894', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-03-23 01:17:20.532297 | orchestrator | skipping: [testbed-node-5] => (item={'id': '93f34c85102ca252c5752cfeec1fe44ab65466d32868259545f7d4480c503c5a', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 27 minutes'})
2026-03-23 01:17:20.532304 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9cf9907cb6ecad8c423b9b3a35891689fce0d7b899026f10d712725e90200602', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 27 minutes'})
2026-03-23 01:17:20.532312 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4160a92b600cd1344d5db0ba579289e45a8a2fd1abc12f51915447803b1a5181', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})
2026-03-23 01:17:33.525035 | orchestrator |
2026-03-23 01:17:33.525088 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-03-23 01:17:33.525094 | orchestrator | Monday 23 March 2026 01:17:20 +0000 (0:00:00.588) 0:00:04.919 **********
2026-03-23 01:17:33.525098 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:33.525103 | orchestrator | ok: [testbed-node-4]
2026-03-23 01:17:33.525106 | orchestrator | ok: [testbed-node-5]
2026-03-23 01:17:33.525110 | orchestrator |
2026-03-23 01:17:33.525114 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-03-23 01:17:33.525118 | orchestrator | Monday 23 March 2026 01:17:21 +0000 (0:00:00.282) 0:00:05.201 **********
2026-03-23 01:17:33.525122 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:33.525126 | orchestrator | skipping: [testbed-node-4]
2026-03-23 01:17:33.525130 | orchestrator | skipping: [testbed-node-5]
2026-03-23 01:17:33.525134 | orchestrator |
2026-03-23 01:17:33.525137 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-03-23 01:17:33.525141 | orchestrator | Monday 23 March 2026 01:17:21 +0000 (0:00:00.283) 0:00:05.485 **********
2026-03-23 01:17:33.525145 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:33.525149 | orchestrator | ok: [testbed-node-4]
2026-03-23 01:17:33.525153 | orchestrator | ok: [testbed-node-5]
2026-03-23 01:17:33.525156 | orchestrator |
2026-03-23 01:17:33.525160 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-23 01:17:33.525164 | orchestrator | Monday 23 March 2026 01:17:21 +0000 (0:00:00.280) 0:00:05.766 **********
2026-03-23 01:17:33.525168 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:33.525171 | orchestrator | ok: [testbed-node-4]
2026-03-23 01:17:33.525175 | orchestrator | ok: [testbed-node-5]
2026-03-23 01:17:33.525179 | orchestrator |
2026-03-23 01:17:33.525183 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-23 01:17:33.525187 | orchestrator | Monday 23 March 2026 01:17:22 +0000 (0:00:00.443) 0:00:06.209 **********
2026-03-23 01:17:33.525190 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-23 01:17:33.525195 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-23 01:17:33.525198 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:33.525202 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-23 01:17:33.525206 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-23 01:17:33.525210 | orchestrator | skipping: [testbed-node-4]
2026-03-23 01:17:33.525214 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-23 01:17:33.525217 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-23 01:17:33.525221 | orchestrator | skipping: [testbed-node-5]
2026-03-23 01:17:33.525225 | orchestrator |
2026-03-23 01:17:33.525229 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-23 01:17:33.525232 | orchestrator | Monday 23 March 2026 01:17:22 +0000 (0:00:00.310) 0:00:06.520 **********
2026-03-23 01:17:33.525236 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:33.525251 | orchestrator | ok: [testbed-node-4]
2026-03-23 01:17:33.525255 | orchestrator | ok: [testbed-node-5]
2026-03-23 01:17:33.525259 | orchestrator |
2026-03-23 01:17:33.525263 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-23 01:17:33.525267 | orchestrator | Monday 23 March 2026 01:17:22 +0000 (0:00:00.289) 0:00:06.810 **********
2026-03-23 01:17:33.525274 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:33.525284 | orchestrator | skipping: [testbed-node-4]
2026-03-23 01:17:33.525291 | orchestrator | skipping: [testbed-node-5]
2026-03-23 01:17:33.525297 | orchestrator |
2026-03-23 01:17:33.525303 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-23 01:17:33.525309 | orchestrator | Monday 23 March 2026 01:17:22 +0000 (0:00:00.287) 0:00:07.097 **********
2026-03-23 01:17:33.525316 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:33.525322 | orchestrator | skipping: [testbed-node-4]
2026-03-23 01:17:33.525329 | orchestrator | skipping: [testbed-node-5]
2026-03-23 01:17:33.525338 | orchestrator |
2026-03-23 01:17:33.525345 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-23 01:17:33.525351 | orchestrator | Monday 23 March 2026 01:17:23 +0000 (0:00:00.455) 0:00:07.553 **********
2026-03-23 01:17:33.525357 | orchestrator | ok: [testbed-node-3]
2026-03-23 01:17:33.525363 | orchestrator | ok: [testbed-node-4]
2026-03-23 01:17:33.525369 | orchestrator | ok: [testbed-node-5]
2026-03-23 01:17:33.525375 | orchestrator |
2026-03-23 01:17:33.525381 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-23 01:17:33.525387 | orchestrator | Monday 23 March 2026 01:17:23 +0000 (0:00:00.319) 0:00:07.872 **********
2026-03-23 01:17:33.525393 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:33.525399 | orchestrator |
2026-03-23 01:17:33.525405 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-23 01:17:33.525420 | orchestrator | Monday 23 March 2026 01:17:24 +0000 (0:00:00.305) 0:00:08.178 **********
2026-03-23 01:17:33.525427 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:33.525435 | orchestrator |
2026-03-23 01:17:33.525439 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-23 01:17:33.525443 | orchestrator | Monday 23 March 2026 01:17:24 +0000 (0:00:00.251) 0:00:08.430 **********
2026-03-23 01:17:33.525447 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:33.525451 | orchestrator |
2026-03-23 01:17:33.525454 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-23 01:17:33.525458 | orchestrator | Monday 23 March 2026 01:17:24 +0000 (0:00:00.068) 0:00:08.692 **********
2026-03-23 01:17:33.525462 | orchestrator |
2026-03-23 01:17:33.525466 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-23 01:17:33.525469 | orchestrator | Monday 23 March 2026 01:17:24 +0000 (0:00:00.065) 0:00:08.760 **********
2026-03-23 01:17:33.525473 | orchestrator |
2026-03-23 01:17:33.525477 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-23 01:17:33.525489 | orchestrator | Monday 23 March 2026 01:17:24 +0000 (0:00:00.070) 0:00:08.826 **********
2026-03-23 01:17:33.525493 | orchestrator |
2026-03-23 01:17:33.525497 | orchestrator | TASK [Print report file information] *******************************************
2026-03-23 01:17:33.525500 | orchestrator | Monday 23 March 2026 01:17:24 +0000 (0:00:00.070) 0:00:08.896 **********
2026-03-23 01:17:33.525504 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:33.525508 | orchestrator |
2026-03-23 01:17:33.525512 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-03-23 01:17:33.525515 | orchestrator | Monday 23 March 2026 01:17:25 +0000 (0:00:00.588) 0:00:09.485 **********
2026-03-23 01:17:33.525519 | orchestrator | skipping: [testbed-node-3]
2026-03-23 01:17:33.525523 | orchestrator |
2026-03-23 01:17:33.525526 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-23 01:17:33.525530 |
orchestrator | Monday 23 March 2026 01:17:25 +0000 (0:00:00.247) 0:00:09.733 ********** 2026-03-23 01:17:33.525534 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:33.525543 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:17:33.525547 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:17:33.525551 | orchestrator | 2026-03-23 01:17:33.525555 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-03-23 01:17:33.525559 | orchestrator | Monday 23 March 2026 01:17:25 +0000 (0:00:00.300) 0:00:10.034 ********** 2026-03-23 01:17:33.525562 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:33.525566 | orchestrator | 2026-03-23 01:17:33.525570 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-23 01:17:33.525573 | orchestrator | Monday 23 March 2026 01:17:26 +0000 (0:00:00.234) 0:00:10.268 ********** 2026-03-23 01:17:33.525615 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-23 01:17:33.525622 | orchestrator | 2026-03-23 01:17:33.525628 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-23 01:17:33.525634 | orchestrator | Monday 23 March 2026 01:17:28 +0000 (0:00:02.213) 0:00:12.482 ********** 2026-03-23 01:17:33.525640 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:33.525646 | orchestrator | 2026-03-23 01:17:33.525652 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-23 01:17:33.525658 | orchestrator | Monday 23 March 2026 01:17:28 +0000 (0:00:00.127) 0:00:12.609 ********** 2026-03-23 01:17:33.525664 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:33.525671 | orchestrator | 2026-03-23 01:17:33.525677 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-23 01:17:33.525683 | orchestrator | Monday 23 March 2026 01:17:28 +0000 (0:00:00.298) 0:00:12.908 
********** 2026-03-23 01:17:33.525689 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:17:33.525694 | orchestrator | 2026-03-23 01:17:33.525701 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-23 01:17:33.525706 | orchestrator | Monday 23 March 2026 01:17:28 +0000 (0:00:00.127) 0:00:13.036 ********** 2026-03-23 01:17:33.525712 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:33.525718 | orchestrator | 2026-03-23 01:17:33.525724 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-23 01:17:33.525730 | orchestrator | Monday 23 March 2026 01:17:29 +0000 (0:00:00.137) 0:00:13.173 ********** 2026-03-23 01:17:33.525736 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:33.525744 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:17:33.525750 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:17:33.525757 | orchestrator | 2026-03-23 01:17:33.525764 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-23 01:17:33.525770 | orchestrator | Monday 23 March 2026 01:17:29 +0000 (0:00:00.492) 0:00:13.666 ********** 2026-03-23 01:17:33.525777 | orchestrator | changed: [testbed-node-3] 2026-03-23 01:17:33.525784 | orchestrator | changed: [testbed-node-4] 2026-03-23 01:17:33.525788 | orchestrator | changed: [testbed-node-5] 2026-03-23 01:17:33.525793 | orchestrator | 2026-03-23 01:17:33.525797 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-23 01:17:33.525802 | orchestrator | Monday 23 March 2026 01:17:31 +0000 (0:00:01.514) 0:00:15.180 ********** 2026-03-23 01:17:33.525806 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:33.525811 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:17:33.525815 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:17:33.525820 | orchestrator | 2026-03-23 01:17:33.525824 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-03-23 01:17:33.525828 | orchestrator | Monday 23 March 2026 01:17:31 +0000 (0:00:00.285) 0:00:15.466 ********** 2026-03-23 01:17:33.525832 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:33.525837 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:17:33.525841 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:17:33.525845 | orchestrator | 2026-03-23 01:17:33.525850 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-23 01:17:33.525854 | orchestrator | Monday 23 March 2026 01:17:32 +0000 (0:00:00.888) 0:00:16.355 ********** 2026-03-23 01:17:33.525858 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:17:33.525868 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:17:33.525873 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:17:33.525877 | orchestrator | 2026-03-23 01:17:33.525881 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-23 01:17:33.525890 | orchestrator | Monday 23 March 2026 01:17:32 +0000 (0:00:00.313) 0:00:16.668 ********** 2026-03-23 01:17:33.525894 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:33.525899 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:17:33.525903 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:17:33.525907 | orchestrator | 2026-03-23 01:17:33.525912 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-23 01:17:33.525916 | orchestrator | Monday 23 March 2026 01:17:32 +0000 (0:00:00.289) 0:00:16.957 ********** 2026-03-23 01:17:33.525920 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:17:33.525924 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:17:33.525929 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:17:33.525933 | orchestrator | 2026-03-23 01:17:33.525938 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-03-23 01:17:33.525942 | orchestrator | Monday 23 March 2026 01:17:33 +0000 (0:00:00.289) 0:00:17.247 ********** 2026-03-23 01:17:33.525946 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:17:33.525951 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:17:33.525955 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:17:33.525959 | orchestrator | 2026-03-23 01:17:33.525968 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-23 01:17:40.712662 | orchestrator | Monday 23 March 2026 01:17:33 +0000 (0:00:00.434) 0:00:17.681 ********** 2026-03-23 01:17:40.712775 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:40.712782 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:17:40.712787 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:17:40.712791 | orchestrator | 2026-03-23 01:17:40.712796 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-23 01:17:40.712801 | orchestrator | Monday 23 March 2026 01:17:34 +0000 (0:00:00.561) 0:00:18.243 ********** 2026-03-23 01:17:40.712806 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:40.712810 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:17:40.712814 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:17:40.712817 | orchestrator | 2026-03-23 01:17:40.712821 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-23 01:17:40.712826 | orchestrator | Monday 23 March 2026 01:17:34 +0000 (0:00:00.498) 0:00:18.741 ********** 2026-03-23 01:17:40.712830 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:40.712833 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:17:40.712837 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:17:40.712841 | orchestrator | 2026-03-23 01:17:40.712845 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-23 
01:17:40.712848 | orchestrator | Monday 23 March 2026 01:17:34 +0000 (0:00:00.304) 0:00:19.046 ********** 2026-03-23 01:17:40.712852 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:17:40.712857 | orchestrator | skipping: [testbed-node-4] 2026-03-23 01:17:40.712861 | orchestrator | skipping: [testbed-node-5] 2026-03-23 01:17:40.712865 | orchestrator | 2026-03-23 01:17:40.712869 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-23 01:17:40.712873 | orchestrator | Monday 23 March 2026 01:17:35 +0000 (0:00:00.470) 0:00:19.516 ********** 2026-03-23 01:17:40.712877 | orchestrator | ok: [testbed-node-3] 2026-03-23 01:17:40.712880 | orchestrator | ok: [testbed-node-4] 2026-03-23 01:17:40.712884 | orchestrator | ok: [testbed-node-5] 2026-03-23 01:17:40.712888 | orchestrator | 2026-03-23 01:17:40.712892 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-23 01:17:40.712895 | orchestrator | Monday 23 March 2026 01:17:35 +0000 (0:00:00.295) 0:00:19.812 ********** 2026-03-23 01:17:40.712899 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-23 01:17:40.712904 | orchestrator | 2026-03-23 01:17:40.712907 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-23 01:17:40.712933 | orchestrator | Monday 23 March 2026 01:17:35 +0000 (0:00:00.240) 0:00:20.052 ********** 2026-03-23 01:17:40.712937 | orchestrator | skipping: [testbed-node-3] 2026-03-23 01:17:40.712941 | orchestrator | 2026-03-23 01:17:40.712944 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-23 01:17:40.712948 | orchestrator | Monday 23 March 2026 01:17:36 +0000 (0:00:00.230) 0:00:20.283 ********** 2026-03-23 01:17:40.712952 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-23 01:17:40.712956 | orchestrator | 2026-03-23 01:17:40.712959 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-23 01:17:40.712963 | orchestrator | Monday 23 March 2026 01:17:37 +0000 (0:00:01.678) 0:00:21.961 ********** 2026-03-23 01:17:40.712967 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-23 01:17:40.712971 | orchestrator | 2026-03-23 01:17:40.712975 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-23 01:17:40.712978 | orchestrator | Monday 23 March 2026 01:17:38 +0000 (0:00:00.266) 0:00:22.228 ********** 2026-03-23 01:17:40.712982 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-23 01:17:40.712986 | orchestrator | 2026-03-23 01:17:40.712990 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-23 01:17:40.712993 | orchestrator | Monday 23 March 2026 01:17:38 +0000 (0:00:00.254) 0:00:22.483 ********** 2026-03-23 01:17:40.712997 | orchestrator | 2026-03-23 01:17:40.713013 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-23 01:17:40.713027 | orchestrator | Monday 23 March 2026 01:17:38 +0000 (0:00:00.225) 0:00:22.708 ********** 2026-03-23 01:17:40.713033 | orchestrator | 2026-03-23 01:17:40.713039 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-23 01:17:40.713046 | orchestrator | Monday 23 March 2026 01:17:38 +0000 (0:00:00.065) 0:00:22.774 ********** 2026-03-23 01:17:40.713052 | orchestrator | 2026-03-23 01:17:40.713058 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-23 01:17:40.713065 | orchestrator | Monday 23 March 2026 01:17:38 +0000 (0:00:00.072) 0:00:22.846 ********** 2026-03-23 01:17:40.713071 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-23 01:17:40.713078 | orchestrator | 
2026-03-23 01:17:40.713083 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-23 01:17:40.713087 | orchestrator | Monday 23 March 2026 01:17:39 +0000 (0:00:01.267) 0:00:24.114 ********** 2026-03-23 01:17:40.713091 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-23 01:17:40.713095 | orchestrator |  "msg": [ 2026-03-23 01:17:40.713100 | orchestrator |  "Validator run completed.", 2026-03-23 01:17:40.713104 | orchestrator |  "You can find the report file here:", 2026-03-23 01:17:40.713109 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-23T01:17:17+00:00-report.json", 2026-03-23 01:17:40.713114 | orchestrator |  "on the following host:", 2026-03-23 01:17:40.713119 | orchestrator |  "testbed-manager" 2026-03-23 01:17:40.713123 | orchestrator |  ] 2026-03-23 01:17:40.713127 | orchestrator | } 2026-03-23 01:17:40.713131 | orchestrator | 2026-03-23 01:17:40.713134 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-23 01:17:40.713140 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-23 01:17:40.713146 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-23 01:17:40.713165 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-23 01:17:40.713169 | orchestrator | 2026-03-23 01:17:40.713174 | orchestrator | 2026-03-23 01:17:40.713178 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-23 01:17:40.713233 | orchestrator | Monday 23 March 2026 01:17:40 +0000 (0:00:00.437) 0:00:24.551 ********** 2026-03-23 01:17:40.713238 | orchestrator | =============================================================================== 2026-03-23 01:17:40.713242 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.21s 2026-03-23 01:17:40.713247 | orchestrator | Aggregate test results step one ----------------------------------------- 1.68s 2026-03-23 01:17:40.713251 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.51s 2026-03-23 01:17:40.713256 | orchestrator | Write report file ------------------------------------------------------- 1.27s 2026-03-23 01:17:40.713260 | orchestrator | Get timestamp for report file ------------------------------------------- 1.03s 2026-03-23 01:17:40.713264 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.89s 2026-03-23 01:17:40.713269 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-03-23 01:17:40.713273 | orchestrator | Print report file information ------------------------------------------- 0.59s 2026-03-23 01:17:40.713278 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.59s 2026-03-23 01:17:40.713282 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s 2026-03-23 01:17:40.713287 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.50s 2026-03-23 01:17:40.713291 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-03-23 01:17:40.713296 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.47s 2026-03-23 01:17:40.713300 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.46s 2026-03-23 01:17:40.713304 | orchestrator | Prepare test data ------------------------------------------------------- 0.44s 2026-03-23 01:17:40.713308 | orchestrator | Print report file information ------------------------------------------- 0.44s 2026-03-23 01:17:40.713313 | orchestrator | Pass if count of unencrypted OSDs 
equals count of OSDs ------------------ 0.43s 2026-03-23 01:17:40.713317 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.43s 2026-03-23 01:17:40.713321 | orchestrator | Flush handlers ---------------------------------------------------------- 0.36s 2026-03-23 01:17:40.713326 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.35s 2026-03-23 01:17:40.888991 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-23 01:17:40.895279 | orchestrator | + set -e 2026-03-23 01:17:40.895341 | orchestrator | + source /opt/manager-vars.sh 2026-03-23 01:17:40.895347 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-23 01:17:40.895352 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-23 01:17:40.895357 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-23 01:17:40.895361 | orchestrator | ++ CEPH_VERSION=reef 2026-03-23 01:17:40.895365 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-23 01:17:40.895370 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-23 01:17:40.895387 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-23 01:17:40.895392 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-23 01:17:40.895402 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-23 01:17:40.895406 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-23 01:17:40.895410 | orchestrator | ++ export ARA=false 2026-03-23 01:17:40.895414 | orchestrator | ++ ARA=false 2026-03-23 01:17:40.895418 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-23 01:17:40.895422 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-23 01:17:40.895426 | orchestrator | ++ export TEMPEST=true 2026-03-23 01:17:40.895430 | orchestrator | ++ TEMPEST=true 2026-03-23 01:17:40.895434 | orchestrator | ++ export IS_ZUUL=true 2026-03-23 01:17:40.895438 | orchestrator | ++ IS_ZUUL=true 2026-03-23 01:17:40.895442 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169 
2026-03-23 01:17:40.895446 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169 2026-03-23 01:17:40.895450 | orchestrator | ++ export EXTERNAL_API=false 2026-03-23 01:17:40.895453 | orchestrator | ++ EXTERNAL_API=false 2026-03-23 01:17:40.895457 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-23 01:17:40.895461 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-23 01:17:40.895465 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-23 01:17:40.895469 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-23 01:17:40.895493 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-23 01:17:40.895497 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-23 01:17:40.895500 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-23 01:17:40.895504 | orchestrator | + source /etc/os-release 2026-03-23 01:17:40.895508 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-23 01:17:40.895512 | orchestrator | ++ NAME=Ubuntu 2026-03-23 01:17:40.895515 | orchestrator | ++ VERSION_ID=24.04 2026-03-23 01:17:40.895519 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-23 01:17:40.895524 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-23 01:17:40.895528 | orchestrator | ++ ID=ubuntu 2026-03-23 01:17:40.895532 | orchestrator | ++ ID_LIKE=debian 2026-03-23 01:17:40.895536 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-23 01:17:40.895539 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-23 01:17:40.895543 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-23 01:17:40.895547 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-23 01:17:40.895552 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-23 01:17:40.895556 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-23 01:17:40.895559 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-23 01:17:40.895574 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl 
monitoring-plugins-basic mysql-client' 2026-03-23 01:17:40.895580 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-23 01:17:40.928458 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-23 01:18:01.123727 | orchestrator | 2026-03-23 01:18:01.123815 | orchestrator | # Status of Elasticsearch 2026-03-23 01:18:01.123827 | orchestrator | 2026-03-23 01:18:01.123835 | orchestrator | + pushd /opt/configuration/contrib 2026-03-23 01:18:01.123843 | orchestrator | + echo 2026-03-23 01:18:01.123851 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-23 01:18:01.123859 | orchestrator | + echo 2026-03-23 01:18:01.123867 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-23 01:18:01.286358 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-23 01:18:01.286997 | orchestrator | 2026-03-23 01:18:01.287021 | orchestrator | # Status of MariaDB 2026-03-23 01:18:01.287030 | orchestrator | 2026-03-23 01:18:01.287038 | orchestrator | + echo 2026-03-23 01:18:01.287044 | orchestrator | + echo '# Status of MariaDB' 2026-03-23 01:18:01.287050 | orchestrator | + echo 2026-03-23 01:18:01.287634 | orchestrator | ++ semver latest 10.0.0-0 2026-03-23 01:18:01.346841 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-23 01:18:01.346902 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-23 01:18:01.346907 | orchestrator | + osism status database 2026-03-23 01:18:02.921997 | orchestrator | 2026-03-23 01:18:02 | ERROR  | Unable to get ansible vault password 2026-03-23 
01:18:02.922155 | orchestrator | 2026-03-23 01:18:02 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-23 01:18:02.922165 | orchestrator | 2026-03-23 01:18:02 | ERROR  | Dropping encrypted entries 2026-03-23 01:18:02.956445 | orchestrator | 2026-03-23 01:18:02 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-03-23 01:18:02.969518 | orchestrator | 2026-03-23 01:18:02 | INFO  | Cluster Status: Primary 2026-03-23 01:18:02.969673 | orchestrator | 2026-03-23 01:18:02 | INFO  | Connected: ON 2026-03-23 01:18:02.969685 | orchestrator | 2026-03-23 01:18:02 | INFO  | Ready: ON 2026-03-23 01:18:02.969693 | orchestrator | 2026-03-23 01:18:02 | INFO  | Cluster Size: 3 2026-03-23 01:18:02.969802 | orchestrator | 2026-03-23 01:18:02 | INFO  | Local State: Synced 2026-03-23 01:18:02.969914 | orchestrator | 2026-03-23 01:18:02 | INFO  | Cluster State UUID: f83d4da2-2652-11f1-85e6-3aa12b67befa 2026-03-23 01:18:02.970558 | orchestrator | 2026-03-23 01:18:02 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-03-23 01:18:02.970747 | orchestrator | 2026-03-23 01:18:02 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-03-23 01:18:02.970992 | orchestrator | 2026-03-23 01:18:02 | INFO  | Local Node UUID: 2bd22654-2653-11f1-b363-86e7dc24faf4 2026-03-23 01:18:02.971619 | orchestrator | 2026-03-23 01:18:02 | INFO  | Flow Control Paused: 0.00% 2026-03-23 01:18:02.971864 | orchestrator | 2026-03-23 01:18:02 | INFO  | Recv Queue Avg: 0.0113636 2026-03-23 01:18:02.971890 | orchestrator | 2026-03-23 01:18:02 | INFO  | Send Queue Avg: 0.000612839 2026-03-23 01:18:02.971895 | orchestrator | 2026-03-23 01:18:02 | INFO  | Transactions: 4285 local commits, 6470 replicated, 88 received 2026-03-23 01:18:02.972069 | orchestrator | 2026-03-23 01:18:02 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-03-23 01:18:02.972412 | orchestrator | 2026-03-23 01:18:02 | INFO  | 
MariaDB Uptime: 21 minutes, 14 seconds 2026-03-23 01:18:02.972692 | orchestrator | 2026-03-23 01:18:02 | INFO  | Threads: 131 connected, 1 running 2026-03-23 01:18:02.973000 | orchestrator | 2026-03-23 01:18:02 | INFO  | Queries: 203654 total, 0 slow 2026-03-23 01:18:02.973189 | orchestrator | 2026-03-23 01:18:02 | INFO  | Aborted Connects: 132 2026-03-23 01:18:02.973381 | orchestrator | 2026-03-23 01:18:02 | INFO  | MariaDB Galera Cluster validation PASSED 2026-03-23 01:18:03.202434 | orchestrator | 2026-03-23 01:18:03.202554 | orchestrator | # Status of Prometheus 2026-03-23 01:18:03.202565 | orchestrator | 2026-03-23 01:18:03.202585 | orchestrator | + echo 2026-03-23 01:18:03.202599 | orchestrator | + echo '# Status of Prometheus' 2026-03-23 01:18:03.202607 | orchestrator | + echo 2026-03-23 01:18:03.202613 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-23 01:18:03.252633 | orchestrator | Unauthorized 2026-03-23 01:18:03.255765 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-23 01:18:03.303946 | orchestrator | Unauthorized 2026-03-23 01:18:03.306149 | orchestrator | 2026-03-23 01:18:03.306246 | orchestrator | # Status of RabbitMQ 2026-03-23 01:18:03.306253 | orchestrator | 2026-03-23 01:18:03.306258 | orchestrator | + echo 2026-03-23 01:18:03.306263 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-23 01:18:03.306267 | orchestrator | + echo 2026-03-23 01:18:03.307528 | orchestrator | ++ semver latest 10.0.0-0 2026-03-23 01:18:03.358304 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-23 01:18:03.358415 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-23 01:18:03.358426 | orchestrator | + osism status messaging 2026-03-23 01:18:10.453374 | orchestrator | 2026-03-23 01:18:10 | ERROR  | Unable to get ansible vault password 2026-03-23 01:18:10.453424 | orchestrator | 2026-03-23 01:18:10 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: 
'/share/ansible_vault_password.key'
2026-03-23 01:18:10.453429 | orchestrator | 2026-03-23 01:18:10 | ERROR  | Dropping encrypted entries
2026-03-23 01:18:10.493101 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack...
2026-03-23 01:18:10.532118 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7
2026-03-23 01:18:10.532300 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15
2026-03-23 01:18:10.532312 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0
2026-03-23 01:18:10.532316 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Cluster Size: 3
2026-03-23 01:18:10.532321 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-23 01:18:10.532485 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-23 01:18:10.532579 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Partitions: None (healthy)
2026-03-23 01:18:10.532928 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Connections: 200, Channels: 199, Queues: 173
2026-03-23 01:18:10.533003 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Messages: 233 total, 232 ready, 1 unacked
2026-03-23 01:18:10.533181 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Message Rates: 8.4/s publish, 8.6/s deliver
2026-03-23 01:18:10.533421 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Disk Free: 58.0 GB (limit: 0.0 GB)
2026-03-23 01:18:10.533544 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB)
2026-03-23 01:18:10.533845 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] File Descriptors: 119/1024
2026-03-23 01:18:10.533856 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-0] Sockets: 73/832
2026-03-23 01:18:10.533983 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack...
2026-03-23 01:18:10.572186 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7
2026-03-23 01:18:10.572237 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15
2026-03-23 01:18:10.572243 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1
2026-03-23 01:18:10.572247 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Cluster Size: 3
2026-03-23 01:18:10.572252 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-23 01:18:10.572305 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-23 01:18:10.572315 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Partitions: None (healthy)
2026-03-23 01:18:10.572319 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Connections: 200, Channels: 199, Queues: 173
2026-03-23 01:18:10.572323 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Messages: 233 total, 232 ready, 1 unacked
2026-03-23 01:18:10.572472 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Message Rates: 8.4/s publish, 8.6/s deliver
2026-03-23 01:18:10.572482 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Disk Free: 58.4 GB (limit: 0.0 GB)
2026-03-23 01:18:10.572486 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Memory Used: 0.17 GB (limit: 12.54 GB)
2026-03-23 01:18:10.572490 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] File Descriptors: 108/1024
2026-03-23 01:18:10.572677 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-1] Sockets: 62/832
2026-03-23 01:18:10.572684 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack...
2026-03-23 01:18:10.626488 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7
2026-03-23 01:18:10.626548 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15
2026-03-23 01:18:10.626557 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2
2026-03-23 01:18:10.626564 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Cluster Size: 3
2026-03-23 01:18:10.626571 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-23 01:18:10.626652 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2
2026-03-23 01:18:10.626664 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Partitions: None (healthy)
2026-03-23 01:18:10.626679 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Connections: 200, Channels: 199, Queues: 173
2026-03-23 01:18:10.626690 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Messages: 233 total, 232 ready, 1 unacked
2026-03-23 01:18:10.626705 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Message Rates: 8.4/s publish, 8.6/s deliver
2026-03-23 01:18:10.626719 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB)
2026-03-23 01:18:10.626730 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB)
2026-03-23 01:18:10.626756 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] File Descriptors: 113/1024
2026-03-23 01:18:10.626767 | orchestrator | 2026-03-23 01:18:10 | INFO  | [testbed-node-2] Sockets: 65/832
2026-03-23 01:18:10.626788 | orchestrator | 2026-03-23 01:18:10 | INFO  | RabbitMQ Cluster validation PASSED
2026-03-23 01:18:10.888051 | orchestrator |
2026-03-23 01:18:10.888104 | orchestrator | # Status of Redis
2026-03-23 01:18:10.888112 | orchestrator |
2026-03-23 01:18:10.888118 | orchestrator | + echo
2026-03-23 01:18:10.888123 | orchestrator | + echo '# Status of Redis'
2026-03-23 01:18:10.888129 | orchestrator | + echo
2026-03-23 01:18:10.888135 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-03-23 01:18:10.892373 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001314s;;;0.000000;10.000000
2026-03-23 01:18:10.893290 | orchestrator |
2026-03-23 01:18:10.893325 | orchestrator | # Create backup of MariaDB database
2026-03-23 01:18:10.893333 | orchestrator |
2026-03-23 01:18:10.893341 | orchestrator | + popd
2026-03-23 01:18:10.893348 | orchestrator | + echo
2026-03-23 01:18:10.893355 | orchestrator | + echo '# Create backup of MariaDB database'
2026-03-23 01:18:10.893360 | orchestrator | + echo
2026-03-23 01:18:10.893366 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-03-23 01:18:12.233260 | orchestrator | 2026-03-23 01:18:12 | INFO  | Prepare task for execution of mariadb_backup.
2026-03-23 01:18:12.304562 | orchestrator | 2026-03-23 01:18:12 | INFO  | Task d9a5aff0-fe8d-4b0f-a534-b1ded88984f7 (mariadb_backup) was prepared for execution.
2026-03-23 01:18:12.304620 | orchestrator | 2026-03-23 01:18:12 | INFO  | It takes a moment until task d9a5aff0-fe8d-4b0f-a534-b1ded88984f7 (mariadb_backup) has been started and output is visible here.
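The cluster validation above reads per-node state from the RabbitMQ Management API (endpoints such as `/api/overview`). A minimal sketch of how such a summary could be derived from the API's JSON, using a hypothetical, trimmed payload rather than a live response:

```python
import json

def summarize_overview(overview: dict) -> dict:
    """Reduce a RabbitMQ /api/overview payload to the fields logged above."""
    totals = overview.get("queue_totals", {})
    rates = overview.get("message_stats", {})
    return {
        "rabbitmq_version": overview["rabbitmq_version"],
        "erlang_version": overview["erlang_version"],
        "cluster_name": overview["cluster_name"],
        "messages": totals.get("messages", 0),
        "messages_ready": totals.get("messages_ready", 0),
        "messages_unacknowledged": totals.get("messages_unacknowledged", 0),
        "publish_rate": rates.get("publish_details", {}).get("rate", 0.0),
    }

# Hypothetical trimmed payload mirroring the values in the log above;
# a real check would fetch this over HTTP with basic auth first.
sample = {
    "rabbitmq_version": "3.13.7",
    "erlang_version": "26.2.5.15",
    "cluster_name": "rabbit@testbed-node-0",
    "queue_totals": {"messages": 233, "messages_ready": 232,
                     "messages_unacknowledged": 1},
    "message_stats": {"publish_details": {"rate": 8.4}},
}

print(json.dumps(summarize_overview(sample)))
```

Keeping the parsing separate from the HTTP fetch makes the summary logic testable without a broker.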
2026-03-23 01:18:38.505309 | orchestrator |
2026-03-23 01:18:38.505396 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-23 01:18:38.505403 | orchestrator |
2026-03-23 01:18:38.505408 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-23 01:18:38.505413 | orchestrator | Monday 23 March 2026 01:18:15 +0000 (0:00:00.252) 0:00:00.252 **********
2026-03-23 01:18:38.505418 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:18:38.505422 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:18:38.505427 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:18:38.505430 | orchestrator |
2026-03-23 01:18:38.505434 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-23 01:18:38.505438 | orchestrator | Monday 23 March 2026 01:18:15 +0000 (0:00:00.353) 0:00:00.606 **********
2026-03-23 01:18:38.505445 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-23 01:18:38.505500 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-23 01:18:38.505506 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-23 01:18:38.505535 | orchestrator |
2026-03-23 01:18:38.505542 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-23 01:18:38.505549 | orchestrator |
2026-03-23 01:18:38.505555 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-23 01:18:38.505561 | orchestrator | Monday 23 March 2026 01:18:16 +0000 (0:00:00.409) 0:00:01.015 **********
2026-03-23 01:18:38.505567 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-23 01:18:38.505573 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-23 01:18:38.505578 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-23 01:18:38.505587 | orchestrator |
2026-03-23 01:18:38.505596 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-23 01:18:38.505602 | orchestrator | Monday 23 March 2026 01:18:16 +0000 (0:00:00.398) 0:00:01.413 **********
2026-03-23 01:18:38.505609 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-23 01:18:38.505616 | orchestrator |
2026-03-23 01:18:38.505622 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-03-23 01:18:38.505628 | orchestrator | Monday 23 March 2026 01:18:17 +0000 (0:00:00.634) 0:00:02.048 **********
2026-03-23 01:18:38.505634 | orchestrator | ok: [testbed-node-0]
2026-03-23 01:18:38.505640 | orchestrator | ok: [testbed-node-1]
2026-03-23 01:18:38.505645 | orchestrator | ok: [testbed-node-2]
2026-03-23 01:18:38.505651 | orchestrator |
2026-03-23 01:18:38.505657 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-03-23 01:18:38.505664 | orchestrator | Monday 23 March 2026 01:18:20 +0000 (0:00:03.057) 0:00:05.106 **********
2026-03-23 01:18:38.505670 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:18:38.505677 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:18:38.505684 | orchestrator | changed: [testbed-node-0]
2026-03-23 01:18:38.505690 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-23 01:18:38.505696 | orchestrator |
2026-03-23 01:18:38.505702 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-23 01:18:38.505707 | orchestrator | skipping: no hosts matched
2026-03-23 01:18:38.505713 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-23 01:18:38.505719 | orchestrator |
2026-03-23 01:18:38.505726 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-23 01:18:38.505732 | orchestrator | skipping: no hosts matched
2026-03-23 01:18:38.505739 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-23 01:18:38.505745 | orchestrator | mariadb_bootstrap_restart
2026-03-23 01:18:38.505751 | orchestrator |
2026-03-23 01:18:38.505758 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-23 01:18:38.505764 | orchestrator | skipping: no hosts matched
2026-03-23 01:18:38.505770 | orchestrator |
2026-03-23 01:18:38.505777 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-23 01:18:38.505783 | orchestrator |
2026-03-23 01:18:38.505789 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-23 01:18:38.505815 | orchestrator | Monday 23 March 2026 01:18:37 +0000 (0:00:17.461) 0:00:22.567 **********
2026-03-23 01:18:38.505822 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:18:38.505828 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:18:38.505835 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:18:38.505841 | orchestrator |
2026-03-23 01:18:38.505869 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-23 01:18:38.505877 | orchestrator | Monday 23 March 2026 01:18:38 +0000 (0:00:00.277) 0:00:22.845 **********
2026-03-23 01:18:38.505883 | orchestrator | skipping: [testbed-node-0]
2026-03-23 01:18:38.505889 | orchestrator | skipping: [testbed-node-1]
2026-03-23 01:18:38.505894 | orchestrator | skipping: [testbed-node-2]
2026-03-23 01:18:38.505900 | orchestrator |
2026-03-23 01:18:38.505905 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 01:18:38.505922 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-23 01:18:38.505930 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-23 01:18:38.505938 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-23 01:18:38.505945 | orchestrator |
2026-03-23 01:18:38.505951 | orchestrator |
2026-03-23 01:18:38.505967 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 01:18:38.505974 | orchestrator | Monday 23 March 2026 01:18:38 +0000 (0:00:00.196) 0:00:23.042 **********
2026-03-23 01:18:38.505988 | orchestrator | ===============================================================================
2026-03-23 01:18:38.505997 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.46s
2026-03-23 01:18:38.506091 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.06s
2026-03-23 01:18:38.506100 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.63s
2026-03-23 01:18:38.506105 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-03-23 01:18:38.506109 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s
2026-03-23 01:18:38.506114 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-03-23 01:18:38.506118 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s
2026-03-23 01:18:38.506123 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.20s
2026-03-23 01:18:38.681966 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-03-23 01:18:38.690839 | orchestrator | + set -e
2026-03-23 01:18:38.690951 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-23 01:18:38.690962 |
orchestrator | ++ export INTERACTIVE=false
2026-03-23 01:18:38.690970 | orchestrator | ++ INTERACTIVE=false
2026-03-23 01:18:38.690976 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-23 01:18:38.690982 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-23 01:18:38.690988 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-23 01:18:38.691195 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-23 01:18:38.697629 | orchestrator |
2026-03-23 01:18:38.697692 | orchestrator | # OpenStack endpoints
2026-03-23 01:18:38.697698 | orchestrator |
2026-03-23 01:18:38.697709 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-23 01:18:38.697714 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-23 01:18:38.697718 | orchestrator | + export OS_CLOUD=admin
2026-03-23 01:18:38.697722 | orchestrator | + OS_CLOUD=admin
2026-03-23 01:18:38.697726 | orchestrator | + echo
2026-03-23 01:18:38.697730 | orchestrator | + echo '# OpenStack endpoints'
2026-03-23 01:18:38.697734 | orchestrator | + echo
2026-03-23 01:18:38.697738 | orchestrator | + openstack endpoint list
2026-03-23 01:18:41.840938 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-23 01:18:41.841000 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-03-23 01:18:41.841007 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-23 01:18:41.841011 | orchestrator | | 0e4f8ef088274b2e912a6b1c8c3c10f0 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-03-23 01:18:41.841015 | orchestrator | | 195cfacce42442789de46ffdafabdcc7 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-03-23 01:18:41.841028 | orchestrator | | 284a836d442e434eb00d844d06b05786 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-03-23 01:18:41.841043 | orchestrator | | 30487f46ee8243c283d22188eac29fc9 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-03-23 01:18:41.841049 | orchestrator | | 562532d5bca54718a206d3e3da1387f3 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-03-23 01:18:41.841059 | orchestrator | | 5801b170c24b4a9b805ebfb9427efe33 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-03-23 01:18:41.841067 | orchestrator | | 5cd1525a3f4c4361afbb1130738e77ba | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-23 01:18:41.841072 | orchestrator | | 70704d7dc14b40f9894c425139b5c43c | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-03-23 01:18:41.841078 | orchestrator | | 87e91a6478fe471caa0d1a57d4aaef50 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-03-23 01:18:41.841084 | orchestrator | | a7552f6631f54f5a91eb20deec7b2c9d | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-03-23 01:18:41.841090 | orchestrator | | b7fb3952c5684e75a958f4b303173f80 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-03-23 01:18:41.841096 | orchestrator | | bc3eda2908b04c56b6ffbfbd97d74389 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-03-23 01:18:41.841101 | orchestrator | | d0669f98046640fabcd0d53078aac12e | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-23 01:18:41.841106 | orchestrator | | d24cf131f1124fc99809d5a67ca45142 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-23 01:18:41.841112 | orchestrator | | d265b9c724a9418e97fca53dd803603b | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-03-23 01:18:41.841118 | orchestrator | | d48a1810aa614666ac10c7381262ece6 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-03-23 01:18:41.841134 | orchestrator | | d7e57b23b3db4b6ba7c45076727aa47f | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-23 01:18:41.841140 | orchestrator | | e416548d1f594a75981439777aa8ca39 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-03-23 01:18:41.841146 | orchestrator | | e49859c0945a4773865700be7a315910 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-03-23 01:18:41.841152 | orchestrator | | e4c4245967de42fbad6000610b992864 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-03-23 01:18:41.841169 | orchestrator | | f8fae00fff8f4cc396e3ced4f0af33a7 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-03-23 01:18:41.841173 | orchestrator | | f929d98d3cab4b95b64f11036aab2f42 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-03-23 01:18:41.841177 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-23 01:18:42.067718 | orchestrator |
2026-03-23 01:18:42.067773 | orchestrator | # Cinder
2026-03-23 01:18:42.067782 | orchestrator |
2026-03-23 01:18:42.067789 | orchestrator | + echo
2026-03-23 01:18:42.067796 | orchestrator | + echo '# Cinder'
2026-03-23 01:18:42.067803 | orchestrator | + echo
2026-03-23 01:18:42.067809 | orchestrator | + openstack volume service list
2026-03-23 01:18:44.468987 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-23 01:18:44.469050 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-03-23 01:18:44.469058 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-23 01:18:44.469074 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-23T01:18:40.000000 |
2026-03-23 01:18:44.469079 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-23T01:18:40.000000 |
2026-03-23 01:18:44.469085 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-23T01:18:39.000000 |
2026-03-23 01:18:44.469090 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-23T01:18:39.000000 |
2026-03-23 01:18:44.469096 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-23T01:18:39.000000 |
2026-03-23 01:18:44.469102 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-23T01:18:40.000000 |
2026-03-23 01:18:44.469107 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-23T01:18:37.000000 |
2026-03-23 01:18:44.469112 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-23T01:18:39.000000 |
2026-03-23 01:18:44.469117 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-23T01:18:39.000000 |
2026-03-23 01:18:44.469122 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
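Health listings like the Cinder table above are easier to gate on programmatically when the CLI emits JSON (`openstack volume service list -f json`). A small sketch of asserting on such output; the rows below are a hypothetical sample shaped like the table above, not captured CLI output:

```python
import json

def all_services_up(rows) -> bool:
    """True when every service row is enabled and its state is 'up'."""
    return all(r["Status"] == "enabled" and r["State"] == "up" for r in rows)

# Hypothetical sample shaped like `openstack volume service list -f json`.
rows = json.loads("""[
  {"Binary": "cinder-scheduler", "Host": "testbed-node-0",
   "Zone": "internal", "Status": "enabled", "State": "up"},
  {"Binary": "cinder-volume", "Host": "testbed-node-0@rbd-volumes",
   "Zone": "nova", "Status": "enabled", "State": "up"}
]""")

print(all_services_up(rows))
```

A check script could exit non-zero when the predicate fails, turning the human-readable table into a CI gate.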
2026-03-23 01:18:44.730703 | orchestrator |
2026-03-23 01:18:44.730748 | orchestrator | # Neutron
2026-03-23 01:18:44.730753 | orchestrator |
2026-03-23 01:18:44.730757 | orchestrator | + echo
2026-03-23 01:18:44.730761 | orchestrator | + echo '# Neutron'
2026-03-23 01:18:44.730764 | orchestrator | + echo
2026-03-23 01:18:44.730767 | orchestrator | + openstack network agent list
2026-03-23 01:18:47.322866 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-23 01:18:47.322933 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-03-23 01:18:47.322939 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-23 01:18:47.322943 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-03-23 01:18:47.322947 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-03-23 01:18:47.322951 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-03-23 01:18:47.322955 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-03-23 01:18:47.322959 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-03-23 01:18:47.322963 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-03-23 01:18:47.322967 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-23 01:18:47.322983 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-23 01:18:47.322987 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-23 01:18:47.322991 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-23 01:18:47.560666 | orchestrator | + openstack network service provider list
2026-03-23 01:18:49.957552 | orchestrator | +---------------+------+---------+
2026-03-23 01:18:49.957649 | orchestrator | | Service Type | Name | Default |
2026-03-23 01:18:49.957667 | orchestrator | +---------------+------+---------+
2026-03-23 01:18:49.957682 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-03-23 01:18:49.957696 | orchestrator | +---------------+------+---------+
2026-03-23 01:18:50.212875 | orchestrator |
2026-03-23 01:18:50.213012 | orchestrator | # Nova
2026-03-23 01:18:50.213022 | orchestrator |
2026-03-23 01:18:50.213028 | orchestrator | + echo
2026-03-23 01:18:50.213036 | orchestrator | + echo '# Nova'
2026-03-23 01:18:50.213042 | orchestrator | + echo
2026-03-23 01:18:50.213049 | orchestrator | + openstack compute service list
2026-03-23 01:18:53.430822 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-23 01:18:53.430955 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-03-23 01:18:53.430969 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-23 01:18:53.430976 | orchestrator | | 7dee5f2e-9c84-4b84-9062-52d32548879f | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-23T01:18:52.000000 |
2026-03-23 01:18:53.430983 | orchestrator | | a0607898-db37-4d73-98d1-d6cb71b1677a | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-23T01:18:45.000000 |
2026-03-23 01:18:53.431010 | orchestrator | | 06d870e9-07ab-47f3-a09a-c8c53916e545 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-23T01:18:51.000000 |
2026-03-23 01:18:53.431021 | orchestrator | | dc397d5a-9f36-4e17-948d-fcde71fe5e2d | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-23T01:18:52.000000 |
2026-03-23 01:18:53.431027 | orchestrator | | 726b62d1-b215-41bb-aa06-903fbc08f7c1 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-23T01:18:43.000000 |
2026-03-23 01:18:53.431033 | orchestrator | | 1cdaa042-597d-4bf9-87d3-dc6905e1cd4a | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-23T01:18:46.000000 |
2026-03-23 01:18:53.431039 | orchestrator | | 4c26ff98-07e4-4456-ba85-ca6fe342c5ea | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-23T01:18:45.000000 |
2026-03-23 01:18:53.431045 | orchestrator | | ff6d312c-2683-434d-93ab-5356014b2c8f | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-23T01:18:45.000000 |
2026-03-23 01:18:53.431052 | orchestrator | | bd308ac7-4007-462b-97d2-15839294ee0a | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-23T01:18:46.000000 |
2026-03-23 01:18:53.431058 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-23 01:18:53.660018 | orchestrator | + openstack hypervisor list
2026-03-23 01:18:56.307204 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-23 01:18:56.307297 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-03-23 01:18:56.307307 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-23 01:18:56.307315 | orchestrator | | cba56d7d-5ee6-4be7-823c-71e21edb9709 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-03-23 01:18:56.307321 | orchestrator | | a06d245c-711b-4e24-8f63-206f205b885a | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-03-23 01:18:56.307351 | orchestrator | | f280cc99-f10f-4193-8220-69c3b87f71a5 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-03-23 01:18:56.307358 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-23 01:18:56.551279 | orchestrator |
2026-03-23 01:18:56.551334 | orchestrator | # Run OpenStack test play
2026-03-23 01:18:56.551343 | orchestrator |
2026-03-23 01:18:56.551349 | orchestrator | + echo
2026-03-23 01:18:56.551355 | orchestrator | + echo '# Run OpenStack test play'
2026-03-23 01:18:56.551362 | orchestrator | + echo
2026-03-23 01:18:56.551368 | orchestrator | + osism apply --environment openstack test
2026-03-23 01:18:57.800408 | orchestrator | 2026-03-23 01:18:57 | INFO  | Trying to run play test in environment openstack
2026-03-23 01:19:07.900059 | orchestrator | 2026-03-23 01:19:07 | INFO  | Prepare task for execution of test.
2026-03-23 01:19:07.978833 | orchestrator | 2026-03-23 01:19:07 | INFO  | Task 8192cc26-0185-4094-b114-fca0f8064f74 (test) was prepared for execution.
2026-03-23 01:19:07.978939 | orchestrator | 2026-03-23 01:19:07 | INFO  | It takes a moment until task 8192cc26-0185-4094-b114-fca0f8064f74 (test) has been started and output is visible here.
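`osism apply` queues the play and the console then polls until output appears, and the test play's later "Wait for instance creation to complete" task follows the same bounded-retry pattern ("FAILED - RETRYING (N retries left)"). A generic sketch of that pattern; the `is_active` check is a stand-in counter, not the real instance-status query:

```python
import time

def wait_for(check, retries=60, delay=0.0) -> bool:
    """Poll `check` until it returns True or retries are exhausted."""
    for attempt in range(retries, 0, -1):
        if check():
            return True
        # Mirrors the "FAILED - RETRYING (N retries left)" console lines.
        print(f"FAILED - RETRYING ({attempt - 1} retries left)")
        time.sleep(delay)
    return False

# Stand-in check that becomes true on the third poll.
state = {"polls": 0}
def is_active() -> bool:
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_for(is_active, retries=5))
```

Counting retries down rather than up keeps the log message consistent with how many attempts remain, which is what the Ansible output above reports.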
2026-03-23 01:21:48.490142 | orchestrator |
2026-03-23 01:21:48.490238 | orchestrator | PLAY [Create test project] *****************************************************
2026-03-23 01:21:48.490249 | orchestrator |
2026-03-23 01:21:48.490256 | orchestrator | TASK [Create test domain] ******************************************************
2026-03-23 01:21:48.490263 | orchestrator | Monday 23 March 2026 01:19:10 +0000 (0:00:00.105) 0:00:00.105 **********
2026-03-23 01:21:48.490270 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490277 | orchestrator |
2026-03-23 01:21:48.490283 | orchestrator | TASK [Create test-admin user] **************************************************
2026-03-23 01:21:48.490290 | orchestrator | Monday 23 March 2026 01:19:14 +0000 (0:00:03.821) 0:00:03.926 **********
2026-03-23 01:21:48.490295 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490301 | orchestrator |
2026-03-23 01:21:48.490307 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-03-23 01:21:48.490313 | orchestrator | Monday 23 March 2026 01:19:19 +0000 (0:00:04.211) 0:00:08.138 **********
2026-03-23 01:21:48.490319 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490340 | orchestrator |
2026-03-23 01:21:48.490347 | orchestrator | TASK [Create test project] *****************************************************
2026-03-23 01:21:48.490361 | orchestrator | Monday 23 March 2026 01:19:24 +0000 (0:00:05.921) 0:00:14.059 **********
2026-03-23 01:21:48.490367 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490373 | orchestrator |
2026-03-23 01:21:48.490380 | orchestrator | TASK [Create test user] ********************************************************
2026-03-23 01:21:48.490387 | orchestrator | Monday 23 March 2026 01:19:28 +0000 (0:00:03.298) 0:00:17.358 **********
2026-03-23 01:21:48.490392 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490396 | orchestrator |
2026-03-23 01:21:48.490400 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-03-23 01:21:48.490404 | orchestrator | Monday 23 March 2026 01:19:32 +0000 (0:00:04.041) 0:00:21.400 **********
2026-03-23 01:21:48.490410 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-03-23 01:21:48.490419 | orchestrator | changed: [localhost] => (item=member)
2026-03-23 01:21:48.490430 | orchestrator | changed: [localhost] => (item=creator)
2026-03-23 01:21:48.490435 | orchestrator |
2026-03-23 01:21:48.490441 | orchestrator | TASK [Create test server group] ************************************************
2026-03-23 01:21:48.490447 | orchestrator | Monday 23 March 2026 01:19:43 +0000 (0:00:11.529) 0:00:32.929 **********
2026-03-23 01:21:48.490569 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490577 | orchestrator |
2026-03-23 01:21:48.490581 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-03-23 01:21:48.490585 | orchestrator | Monday 23 March 2026 01:19:48 +0000 (0:00:04.625) 0:00:37.555 **********
2026-03-23 01:21:48.490589 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490614 | orchestrator |
2026-03-23 01:21:48.490618 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-03-23 01:21:48.490622 | orchestrator | Monday 23 March 2026 01:19:53 +0000 (0:00:04.734) 0:00:42.290 **********
2026-03-23 01:21:48.490626 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490630 | orchestrator |
2026-03-23 01:21:48.490634 | orchestrator | TASK [Create icmp security group] **********************************************
2026-03-23 01:21:48.490638 | orchestrator | Monday 23 March 2026 01:19:57 +0000 (0:00:04.159) 0:00:46.450 **********
2026-03-23 01:21:48.490642 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490646 | orchestrator |
2026-03-23 01:21:48.490650 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-03-23 01:21:48.490653 | orchestrator | Monday 23 March 2026 01:20:01 +0000 (0:00:03.795) 0:00:50.245 **********
2026-03-23 01:21:48.490657 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490661 | orchestrator |
2026-03-23 01:21:48.490665 | orchestrator | TASK [Create test keypair] *****************************************************
2026-03-23 01:21:48.490669 | orchestrator | Monday 23 March 2026 01:20:05 +0000 (0:00:04.026) 0:00:54.271 **********
2026-03-23 01:21:48.490672 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490676 | orchestrator |
2026-03-23 01:21:48.490680 | orchestrator | TASK [Create test network] *****************************************************
2026-03-23 01:21:48.490684 | orchestrator | Monday 23 March 2026 01:20:09 +0000 (0:00:03.871) 0:00:58.142 **********
2026-03-23 01:21:48.490687 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490691 | orchestrator |
2026-03-23 01:21:48.490695 | orchestrator | TASK [Create test subnet] ******************************************************
2026-03-23 01:21:48.490698 | orchestrator | Monday 23 March 2026 01:20:13 +0000 (0:00:04.719) 0:01:02.862 **********
2026-03-23 01:21:48.490702 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490706 | orchestrator |
2026-03-23 01:21:48.490710 | orchestrator | TASK [Create test router] ******************************************************
2026-03-23 01:21:48.490714 | orchestrator | Monday 23 March 2026 01:20:18 +0000 (0:00:04.942) 0:01:07.805 **********
2026-03-23 01:21:48.490719 | orchestrator | changed: [localhost]
2026-03-23 01:21:48.490724 | orchestrator |
2026-03-23 01:21:48.490728 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-03-23 01:21:48.490733 | orchestrator |
2026-03-23 01:21:48.490737 | orchestrator | TASK [Get test server group] ***************************************************
2026-03-23 01:21:48.490741 | orchestrator | Monday 23 March 2026 01:20:30 +0000 (0:00:11.650) 0:01:19.456 **********
2026-03-23 01:21:48.490746 | orchestrator | ok: [localhost]
2026-03-23 01:21:48.490750 | orchestrator |
2026-03-23 01:21:48.490755 | orchestrator | TASK [Detach test volume] ******************************************************
2026-03-23 01:21:48.490759 | orchestrator | Monday 23 March 2026 01:20:33 +0000 (0:00:03.230) 0:01:22.686 **********
2026-03-23 01:21:48.490763 | orchestrator | skipping: [localhost]
2026-03-23 01:21:48.490768 | orchestrator |
2026-03-23 01:21:48.490772 | orchestrator | TASK [Delete test volume] ******************************************************
2026-03-23 01:21:48.490776 | orchestrator | Monday 23 March 2026 01:20:33 +0000 (0:00:00.057) 0:01:22.744 **********
2026-03-23 01:21:48.490781 | orchestrator | skipping: [localhost]
2026-03-23 01:21:48.490785 | orchestrator |
2026-03-23 01:21:48.490789 | orchestrator | TASK [Delete test instances] ***************************************************
2026-03-23 01:21:48.490793 | orchestrator | Monday 23 March 2026 01:20:33 +0000 (0:00:00.071) 0:01:22.815 **********
2026-03-23 01:21:48.490798 | orchestrator | skipping: [localhost] => (item=test-4)
2026-03-23 01:21:48.490802 | orchestrator | skipping: [localhost] => (item=test-3)
2026-03-23 01:21:48.490833 | orchestrator | skipping: [localhost] => (item=test-2)
2026-03-23 01:21:48.490844 | orchestrator | skipping: [localhost] => (item=test-1)
2026-03-23 01:21:48.490849 | orchestrator | skipping: [localhost] => (item=test)
2026-03-23 01:21:48.490853 | orchestrator | skipping: [localhost]
2026-03-23 01:21:48.490858 | orchestrator |
2026-03-23 01:21:48.490877 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-03-23 01:21:48.490886 | orchestrator | Monday 23 March 2026 01:20:33 +0000 (0:00:00.149) 0:01:22.965 **********
2026-03-23 01:21:48.490890 | orchestrator | skipping: [localhost]
2026-03-23 01:21:48.490895 | orchestrator |
2026-03-23 01:21:48.490899 | orchestrator | TASK [Create test instances] ***************************************************
2026-03-23 01:21:48.490903 | orchestrator | Monday 23 March 2026 01:20:33 +0000 (0:00:00.134) 0:01:23.099 **********
2026-03-23 01:21:48.490908 | orchestrator | changed: [localhost] => (item=test)
2026-03-23 01:21:48.490912 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-23 01:21:48.490917 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-23 01:21:48.490921 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-23 01:21:48.490925 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-23 01:21:48.490929 | orchestrator |
2026-03-23 01:21:48.490933 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-03-23 01:21:48.490938 | orchestrator | Monday 23 March 2026 01:20:38 +0000 (0:00:04.059) 0:01:27.159 **********
2026-03-23 01:21:48.490942 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-03-23 01:21:48.490948 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-03-23 01:21:48.490952 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-03-23 01:21:48.490956 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-03-23 01:21:48.490962 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j996495545406.2702', 'results_file': '/ansible/.ansible_async/j996495545406.2702', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-23 01:21:48.490969 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-03-23 01:21:48.490976 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j461240023406.2727', 'results_file': '/ansible/.ansible_async/j461240023406.2727', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-23 01:21:48.490981 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j199038609442.2752', 'results_file': '/ansible/.ansible_async/j199038609442.2752', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-23 01:21:48.490986 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j778538382954.2777', 'results_file': '/ansible/.ansible_async/j778538382954.2777', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-23 01:21:48.490990 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j322441518065.2802', 'results_file': '/ansible/.ansible_async/j322441518065.2802', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-23 01:21:48.490995 | orchestrator | 2026-03-23 01:21:48.490999 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-23 01:21:48.491004 | orchestrator | Monday 23 March 2026 01:21:35 +0000 (0:00:57.504) 0:02:24.663 ********** 2026-03-23 01:21:48.491008 | orchestrator | changed: [localhost] => (item=test) 2026-03-23 01:21:48.491012 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-23 01:21:48.491017 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-23 01:21:48.491022 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-23 01:21:48.491028 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-23 01:21:48.491034 | orchestrator | 2026-03-23 01:21:48.491040 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 
2026-03-23 01:21:48.491049 | orchestrator | Monday 23 March 2026 01:21:39 +0000 (0:00:04.381) 0:02:29.045 **********
2026-03-23 01:21:48.491058 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-03-23 01:21:48.491070 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j974983967342.2914', 'results_file': '/ansible/.ansible_async/j974983967342.2914', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-23 01:21:48.491077 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j574408603333.2939', 'results_file': '/ansible/.ansible_async/j574408603333.2939', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-23 01:21:48.491083 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j708964039490.2964', 'results_file': '/ansible/.ansible_async/j708964039490.2964', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-23 01:21:48.491096 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j661593371877.2989', 'results_file': '/ansible/.ansible_async/j661593371877.2989', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-23 01:22:29.033483 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j58604800046.3014', 'results_file': '/ansible/.ansible_async/j58604800046.3014', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-23 01:22:29.033551 | orchestrator |
2026-03-23 01:22:29.033561 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-03-23 01:22:29.033570 | orchestrator | Monday 23 March 2026 01:21:49 +0000 (0:00:09.298) 0:02:38.343 **********
2026-03-23 01:22:29.033606 | orchestrator | changed: [localhost] => (item=test)
2026-03-23 01:22:29.033613 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-23 01:22:29.033619 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-23 01:22:29.033625 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-23 01:22:29.033631 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-23 01:22:29.033637 | orchestrator |
2026-03-23 01:22:29.033644 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-03-23 01:22:29.033650 | orchestrator | Monday 23 March 2026 01:21:53 +0000 (0:00:04.541) 0:02:42.885 **********
2026-03-23 01:22:29.033658 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-03-23 01:22:29.033664 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j646169857189.3083', 'results_file': '/ansible/.ansible_async/j646169857189.3083', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-23 01:22:29.033671 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j28605578559.3108', 'results_file': '/ansible/.ansible_async/j28605578559.3108', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-23 01:22:29.033687 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j786264506695.3134', 'results_file': '/ansible/.ansible_async/j786264506695.3134', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-23 01:22:29.033694 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j207073527791.3160', 'results_file': '/ansible/.ansible_async/j207073527791.3160', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-23 01:22:29.033701 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j548205540743.3186', 'results_file': '/ansible/.ansible_async/j548205540743.3186', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-23 01:22:29.033708 | orchestrator |
2026-03-23 01:22:29.033715 | orchestrator | TASK [Create test volume] ******************************************************
2026-03-23 01:22:29.033724 | orchestrator | Monday 23 March 2026 01:22:03 +0000 (0:00:09.929) 0:02:52.815 **********
2026-03-23 01:22:29.033745 | orchestrator | changed: [localhost]
2026-03-23 01:22:29.033752 | orchestrator |
2026-03-23 01:22:29.033759 | orchestrator | TASK [Attach test volume] ******************************************************
2026-03-23 01:22:29.033766 | orchestrator | Monday 23 March 2026 01:22:10 +0000 (0:00:06.578) 0:02:59.394 **********
2026-03-23 01:22:29.033772 | orchestrator | changed: [localhost]
2026-03-23 01:22:29.033778 | orchestrator |
2026-03-23 01:22:29.033785 | orchestrator | TASK [Create floating ip address] **********************************************
2026-03-23 01:22:29.033792 | orchestrator | Monday 23 March 2026 01:22:23 +0000 (0:00:13.572) 0:03:12.966 **********
2026-03-23 01:22:29.033799 | orchestrator | ok: [localhost]
2026-03-23 01:22:29.033805 | orchestrator |
2026-03-23 01:22:29.033812 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-03-23 01:22:29.033819 | orchestrator | Monday 23 March 2026 01:22:28 +0000 (0:00:04.957) 0:03:17.924 **********
2026-03-23 01:22:29.033825 | orchestrator | ok: [localhost] => {
2026-03-23 01:22:29.033834 | orchestrator |     "msg": "192.168.112.197"
2026-03-23 01:22:29.033843 | orchestrator | }
2026-03-23 01:22:29.033852 | orchestrator |
2026-03-23 01:22:29.033861 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 01:22:29.033870 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-23 01:22:29.033879 | orchestrator |
2026-03-23 01:22:29.033885 | orchestrator |
2026-03-23 01:22:29.033891 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 01:22:29.033897 | orchestrator | Monday 23 March 2026 01:22:28 +0000 (0:00:00.053) 0:03:17.978 **********
2026-03-23 01:22:29.033903 | orchestrator | ===============================================================================
2026-03-23 01:22:29.033908 | orchestrator | Wait for instance creation to complete --------------------------------- 57.50s
2026-03-23 01:22:29.033914 | orchestrator | Attach test volume ----------------------------------------------------- 13.57s
2026-03-23 01:22:29.033920 | orchestrator | Create test router ----------------------------------------------------- 11.65s
2026-03-23 01:22:29.033926 | orchestrator | Add member roles to user test ------------------------------------------ 11.53s
2026-03-23 01:22:29.033931 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.93s
2026-03-23 01:22:29.033937 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.30s
2026-03-23 01:22:29.033943 | orchestrator | Create test volume ------------------------------------------------------ 6.58s
2026-03-23 01:22:29.033965 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.92s
2026-03-23 01:22:29.033972 | orchestrator | Create floating ip address ---------------------------------------------- 4.96s
2026-03-23 01:22:29.033978 | orchestrator | Create test subnet ------------------------------------------------------ 4.94s
2026-03-23 01:22:29.033984 | orchestrator | Create ssh security group ----------------------------------------------- 4.73s
2026-03-23 01:22:29.033990 | orchestrator | Create test network ----------------------------------------------------- 4.72s
2026-03-23 01:22:29.033996 | orchestrator | Create test server group ------------------------------------------------ 4.63s
2026-03-23 01:22:29.034002 | orchestrator | Add tag to instances ---------------------------------------------------- 4.54s
2026-03-23 01:22:29.034008 | orchestrator | Add metadata to instances ----------------------------------------------- 4.38s
2026-03-23 01:22:29.034046 | orchestrator | Create test-admin user -------------------------------------------------- 4.21s
2026-03-23 01:22:29.034053 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.16s
2026-03-23 01:22:29.034060 | orchestrator | Create test instances --------------------------------------------------- 4.06s
2026-03-23 01:22:29.034066 | orchestrator | Create test user -------------------------------------------------------- 4.04s
2026-03-23 01:22:29.034073 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.03s
2026-03-23 01:22:29.213300 | orchestrator | + server_list
2026-03-23 01:22:29.213345 | orchestrator | + openstack --os-cloud test server list
2026-03-23 01:22:32.642401 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-23 01:22:32.642545 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-03-23 01:22:32.642554 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-23 01:22:32.642663 | orchestrator | | b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 | test-4 | ACTIVE | test=192.168.112.145, 192.168.200.30 | N/A (booted from volume) | SCS-1L-1 |
2026-03-23 01:22:32.642675 | orchestrator | | 2e3a3ae5-0b02-41f8-bb1f-dfad340002da | test-3 | ACTIVE | test=192.168.112.182, 192.168.200.60 | N/A (booted from volume) | SCS-1L-1 |
2026-03-23 01:22:32.642681 | orchestrator | | 3cb8204c-2351-4006-bb50-b26c97b2873a | test-1 | ACTIVE | test=192.168.112.188, 192.168.200.89 | N/A (booted from volume) | SCS-1L-1 |
2026-03-23 01:22:32.642687 | orchestrator | | 54f7dcdb-6157-4929-a767-a148c7cd7c17 | test-2 | ACTIVE | test=192.168.112.192, 192.168.200.181 | N/A (booted from volume) | SCS-1L-1 |
2026-03-23 01:22:32.642693 | orchestrator | | fb422083-6af7-42f2-b3b4-5b1430583079 | test | ACTIVE | test=192.168.112.197, 192.168.200.119 | N/A (booted from volume) | SCS-1L-1 |
2026-03-23 01:22:32.642699 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-23 01:22:32.793815 | orchestrator | + openstack --os-cloud test server show test
2026-03-23 01:22:35.969588 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-23 01:22:35.969836 | orchestrator | | Field | Value |
2026-03-23 01:22:35.969845 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-23 01:22:35.969850 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-23 01:22:35.969854 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-23 01:22:35.969859 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-23 01:22:35.969875 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-03-23 01:22:35.969884 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-23 01:22:35.969888 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-23 01:22:35.969904 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-23 01:22:35.969909 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-23 01:22:35.969912 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-23 01:22:35.969916 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-23 01:22:35.969920 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-23 01:22:35.969924 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-23 01:22:35.969932 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-23 01:22:35.969936 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-23 01:22:35.969940 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-23 01:22:35.969944 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-23T01:21:08.000000 | 2026-03-23 01:22:35.969952 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-23 01:22:35.969956 | orchestrator | | accessIPv4 | | 2026-03-23 01:22:35.969960 | orchestrator | | accessIPv6 | | 2026-03-23 01:22:35.969964 | orchestrator | | addresses | test=192.168.112.197, 192.168.200.119 | 2026-03-23 01:22:35.969970 | orchestrator | | config_drive | | 2026-03-23 01:22:35.969981 | orchestrator | | created | 2026-03-23T01:20:42Z | 2026-03-23 01:22:35.969999 | orchestrator | | description | None | 2026-03-23 01:22:35.970005 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-23 01:22:35.970083 | orchestrator | | 
hostId | b4a6d1543cdb7696400c3c341ddc5766e6a7687a28ea933d432d313f | 2026-03-23 01:22:35.970093 | orchestrator | | host_status | None | 2026-03-23 01:22:35.970105 | orchestrator | | id | fb422083-6af7-42f2-b3b4-5b1430583079 | 2026-03-23 01:22:35.970110 | orchestrator | | image | N/A (booted from volume) | 2026-03-23 01:22:35.970114 | orchestrator | | key_name | test | 2026-03-23 01:22:35.970119 | orchestrator | | locked | False | 2026-03-23 01:22:35.970124 | orchestrator | | locked_reason | None | 2026-03-23 01:22:35.970133 | orchestrator | | name | test | 2026-03-23 01:22:35.970138 | orchestrator | | pinned_availability_zone | None | 2026-03-23 01:22:35.970163 | orchestrator | | progress | 0 | 2026-03-23 01:22:35.970172 | orchestrator | | project_id | 9d24ed7498264fde8d3f8f3d51ff80a9 | 2026-03-23 01:22:35.970178 | orchestrator | | properties | hostname='test' | 2026-03-23 01:22:35.970188 | orchestrator | | security_groups | name='icmp' | 2026-03-23 01:22:35.970194 | orchestrator | | | name='ssh' | 2026-03-23 01:22:35.970200 | orchestrator | | server_groups | None | 2026-03-23 01:22:35.970206 | orchestrator | | status | ACTIVE | 2026-03-23 01:22:35.970215 | orchestrator | | tags | test | 2026-03-23 01:22:35.970221 | orchestrator | | trusted_image_certificates | None | 2026-03-23 01:22:35.970226 | orchestrator | | updated | 2026-03-23T01:21:41Z | 2026-03-23 01:22:35.970231 | orchestrator | | user_id | 845dfe88ba0742feba44f98bf822dcb4 | 2026-03-23 01:22:35.970246 | orchestrator | | volumes_attached | delete_on_termination='True', id='bd705b69-f4cd-40a7-ad11-e3e5af4b4e90' | 2026-03-23 01:22:35.970255 | orchestrator | | | delete_on_termination='False', id='0c8290a0-d22e-40e8-9468-59f706bdda48' | 2026-03-23 01:22:35.973069 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-23 01:22:36.200118 | orchestrator | + openstack --os-cloud test server show test-1 2026-03-23 01:22:38.799043 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-23 01:22:38.799092 | orchestrator | | Field | Value | 2026-03-23 01:22:38.799106 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-23 01:22:38.799110 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-23 01:22:38.799113 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-23 01:22:38.799116 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-23 01:22:38.799120 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-23 01:22:38.799127 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-23 01:22:38.799130 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-23 
01:22:38.799140 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-23 01:22:38.799143 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-23 01:22:38.799149 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-23 01:22:38.799152 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-23 01:22:38.799156 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-23 01:22:38.799159 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-23 01:22:38.799162 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-23 01:22:38.799165 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-23 01:22:38.799170 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-23 01:22:38.799174 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-23T01:21:10.000000 | 2026-03-23 01:22:38.799179 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-23 01:22:38.799183 | orchestrator | | accessIPv4 | | 2026-03-23 01:22:38.799188 | orchestrator | | accessIPv6 | | 2026-03-23 01:22:38.799191 | orchestrator | | addresses | test=192.168.112.188, 192.168.200.89 | 2026-03-23 01:22:38.799195 | orchestrator | | config_drive | | 2026-03-23 01:22:38.799198 | orchestrator | | created | 2026-03-23T01:20:43Z | 2026-03-23 01:22:38.799201 | orchestrator | | description | None | 2026-03-23 01:22:38.799204 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-23 01:22:38.799208 | orchestrator | | hostId | b4a6d1543cdb7696400c3c341ddc5766e6a7687a28ea933d432d313f | 2026-03-23 01:22:38.799211 | orchestrator | | host_status | None | 2026-03-23 01:22:38.799217 | orchestrator | | id | 
3cb8204c-2351-4006-bb50-b26c97b2873a | 2026-03-23 01:22:38.799225 | orchestrator | | image | N/A (booted from volume) | 2026-03-23 01:22:38.799228 | orchestrator | | key_name | test | 2026-03-23 01:22:38.799232 | orchestrator | | locked | False | 2026-03-23 01:22:38.799235 | orchestrator | | locked_reason | None | 2026-03-23 01:22:38.799238 | orchestrator | | name | test-1 | 2026-03-23 01:22:38.799241 | orchestrator | | pinned_availability_zone | None | 2026-03-23 01:22:38.799246 | orchestrator | | progress | 0 | 2026-03-23 01:22:38.799249 | orchestrator | | project_id | 9d24ed7498264fde8d3f8f3d51ff80a9 | 2026-03-23 01:22:38.799253 | orchestrator | | properties | hostname='test-1' | 2026-03-23 01:22:38.799262 | orchestrator | | security_groups | name='icmp' | 2026-03-23 01:22:38.799265 | orchestrator | | | name='ssh' | 2026-03-23 01:22:38.799269 | orchestrator | | server_groups | None | 2026-03-23 01:22:38.799272 | orchestrator | | status | ACTIVE | 2026-03-23 01:22:38.799283 | orchestrator | | tags | test | 2026-03-23 01:22:38.799290 | orchestrator | | trusted_image_certificates | None | 2026-03-23 01:22:38.799293 | orchestrator | | updated | 2026-03-23T01:21:41Z | 2026-03-23 01:22:38.799299 | orchestrator | | user_id | 845dfe88ba0742feba44f98bf822dcb4 | 2026-03-23 01:22:38.799302 | orchestrator | | volumes_attached | delete_on_termination='True', id='50b549bb-374f-4d59-a45c-21ad58557746' | 2026-03-23 01:22:38.803143 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-23 01:22:39.045897 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-23 01:22:41.723313 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-23 01:22:41.723359 | orchestrator | | Field | Value | 2026-03-23 01:22:41.723366 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-23 01:22:41.723371 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-23 01:22:41.723376 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-23 01:22:41.723380 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-23 01:22:41.723384 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-23 01:22:41.723408 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-23 01:22:41.723412 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-23 01:22:41.723434 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-23 01:22:41.723439 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-23 01:22:41.723443 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-23 01:22:41.723448 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-23 01:22:41.723452 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-23 01:22:41.723456 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-23 01:22:41.723461 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-23 01:22:41.723465 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-23 01:22:41.723472 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-23 01:22:41.723479 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-23T01:21:08.000000 | 2026-03-23 01:22:41.723486 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-23 01:22:41.723491 | orchestrator | | accessIPv4 | | 2026-03-23 01:22:41.723495 | orchestrator | | accessIPv6 | | 2026-03-23 01:22:41.723500 | orchestrator | | addresses | test=192.168.112.192, 192.168.200.181 | 2026-03-23 01:22:41.723504 | orchestrator | | config_drive | | 2026-03-23 01:22:41.723509 | orchestrator | | created | 2026-03-23T01:20:43Z | 2026-03-23 01:22:41.723513 | orchestrator | | description | None | 2026-03-23 01:22:41.723517 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-23 01:22:41.723527 | orchestrator | | hostId | b4a6d1543cdb7696400c3c341ddc5766e6a7687a28ea933d432d313f | 2026-03-23 01:22:41.723530 | orchestrator | | host_status | None | 2026-03-23 01:22:41.723537 | orchestrator | | id | 54f7dcdb-6157-4929-a767-a148c7cd7c17 | 2026-03-23 01:22:41.723541 | orchestrator | | image | N/A (booted from volume) | 2026-03-23 01:22:41.723545 | orchestrator | | key_name | test | 2026-03-23 01:22:41.723549 | orchestrator | | locked | False | 2026-03-23 01:22:41.723552 | orchestrator | | locked_reason | None | 2026-03-23 01:22:41.723556 | orchestrator | | name | test-2 | 2026-03-23 01:22:41.723560 | orchestrator | | pinned_availability_zone | None | 2026-03-23 01:22:41.723566 | orchestrator | | progress | 0 | 2026-03-23 
01:22:41.723570 | orchestrator | | project_id | 9d24ed7498264fde8d3f8f3d51ff80a9 | 2026-03-23 01:22:41.723574 | orchestrator | | properties | hostname='test-2' | 2026-03-23 01:22:41.723580 | orchestrator | | security_groups | name='icmp' | 2026-03-23 01:22:41.723584 | orchestrator | | | name='ssh' | 2026-03-23 01:22:41.723588 | orchestrator | | server_groups | None | 2026-03-23 01:22:41.723592 | orchestrator | | status | ACTIVE | 2026-03-23 01:22:41.723595 | orchestrator | | tags | test | 2026-03-23 01:22:41.723602 | orchestrator | | trusted_image_certificates | None | 2026-03-23 01:22:41.723606 | orchestrator | | updated | 2026-03-23T01:21:42Z | 2026-03-23 01:22:41.723647 | orchestrator | | user_id | 845dfe88ba0742feba44f98bf822dcb4 | 2026-03-23 01:22:41.723653 | orchestrator | | volumes_attached | delete_on_termination='True', id='16835767-90e3-4103-8d5f-6e1629a52b74' | 2026-03-23 01:22:41.728238 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-23 01:22:41.965357 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-23 01:22:44.688005 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-23 01:22:44.688068 | orchestrator | | Field | Value | 2026-03-23 01:22:44.688077 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-23 01:22:44.688084 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-23 01:22:44.688091 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-23 01:22:44.688098 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-23 01:22:44.688117 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-03-23 01:22:44.688131 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-23 01:22:44.688138 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-23 01:22:44.688155 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-23 01:22:44.688162 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-23 01:22:44.688169 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-23 01:22:44.688175 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-23 01:22:44.688181 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-23 01:22:44.688187 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-23 01:22:44.688198 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-23 01:22:44.688219 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-23 01:22:44.688236 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-23 01:22:44.688243 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-23T01:21:10.000000 |
2026-03-23 01:22:44.688255 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-23 01:22:44.688261 | orchestrator | | accessIPv4 | |
2026-03-23 01:22:44.688269 | orchestrator | | accessIPv6 | |
2026-03-23 01:22:44.688276 | orchestrator | | addresses | test=192.168.112.182, 192.168.200.60 |
2026-03-23 01:22:44.688282 | orchestrator | | config_drive | |
2026-03-23 01:22:44.688293 | orchestrator | | created | 2026-03-23T01:20:43Z |
2026-03-23 01:22:44.688299 | orchestrator | | description | None |
2026-03-23 01:22:44.688305 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-23 01:22:44.688315 | orchestrator | | hostId | b4a6d1543cdb7696400c3c341ddc5766e6a7687a28ea933d432d313f |
2026-03-23 01:22:44.688322 | orchestrator | | host_status | None |
2026-03-23 01:22:44.688332 | orchestrator | | id | 2e3a3ae5-0b02-41f8-bb1f-dfad340002da |
2026-03-23 01:22:44.688337 | orchestrator | | image | N/A (booted from volume) |
2026-03-23 01:22:44.688341 | orchestrator | | key_name | test |
2026-03-23 01:22:44.688344 | orchestrator | | locked | False |
2026-03-23 01:22:44.688351 | orchestrator | | locked_reason | None |
2026-03-23 01:22:44.688355 | orchestrator | | name | test-3 |
2026-03-23 01:22:44.688359 | orchestrator | | pinned_availability_zone | None |
2026-03-23 01:22:44.688363 | orchestrator | | progress | 0 |
2026-03-23 01:22:44.688369 | orchestrator | | project_id | 9d24ed7498264fde8d3f8f3d51ff80a9 |
2026-03-23 01:22:44.688373 | orchestrator | | properties | hostname='test-3' |
2026-03-23 01:22:44.688380 | orchestrator | | security_groups | name='icmp' |
2026-03-23 01:22:44.688384 | orchestrator | | | name='ssh' |
2026-03-23 01:22:44.688388 | orchestrator | | server_groups | None |
2026-03-23 01:22:44.688391 | orchestrator | | status | ACTIVE |
2026-03-23 01:22:44.688398 | orchestrator | | tags | test |
2026-03-23 01:22:44.688402 | orchestrator | | trusted_image_certificates | None |
2026-03-23 01:22:44.688406 | orchestrator | | updated | 2026-03-23T01:21:42Z |
2026-03-23 01:22:44.688410 | orchestrator | | user_id | 845dfe88ba0742feba44f98bf822dcb4 |
2026-03-23 01:22:44.688416 | orchestrator | | volumes_attached | delete_on_termination='True', id='f0dccc3e-4166-4d75-94d6-8074eb7d1cab' |
2026-03-23 01:22:44.692826 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-23 01:22:44.981322 | orchestrator | + openstack --os-cloud test server show test-4
2026-03-23 01:22:47.929043 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-23 01:22:47.929149 | orchestrator | | Field | Value |
2026-03-23 01:22:47.929162 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-23 01:22:47.929191 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-23 01:22:47.929199 | orchestrator | |
OS-EXT-AZ:availability_zone | nova |
2026-03-23 01:22:47.929206 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-23 01:22:47.929213 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-03-23 01:22:47.929219 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-23 01:22:47.929226 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-23 01:22:47.929249 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-23 01:22:47.929256 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-23 01:22:47.929262 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-23 01:22:47.929275 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-23 01:22:47.929282 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-23 01:22:47.929288 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-23 01:22:47.929295 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-23 01:22:47.929311 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-23 01:22:47.929756 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-23 01:22:47.929796 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-23T01:21:10.000000 |
2026-03-23 01:22:47.929817 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-23 01:22:47.929825 | orchestrator | | accessIPv4 | |
2026-03-23 01:22:47.929841 | orchestrator | | accessIPv6 | |
2026-03-23 01:22:47.929847 | orchestrator | | addresses | test=192.168.112.145, 192.168.200.30 |
2026-03-23 01:22:47.929854 | orchestrator | | config_drive | |
2026-03-23 01:22:47.929861 | orchestrator | | created | 2026-03-23T01:20:44Z |
2026-03-23 01:22:47.929870 | orchestrator | | description | None |
2026-03-23 01:22:47.929876 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-23 01:22:47.929882 | orchestrator | | hostId | 150e2ec6aecaa90448b4f39e1cf7b0d4b1ec41e0035c385095695d82 |
2026-03-23 01:22:47.929888 | orchestrator | | host_status | None |
2026-03-23 01:22:47.929900 | orchestrator | | id | b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 |
2026-03-23 01:22:47.929911 | orchestrator | | image | N/A (booted from volume) |
2026-03-23 01:22:47.929917 | orchestrator | | key_name | test |
2026-03-23 01:22:47.929924 | orchestrator | | locked | False |
2026-03-23 01:22:47.929930 | orchestrator | | locked_reason | None |
2026-03-23 01:22:47.929937 | orchestrator | | name | test-4 |
2026-03-23 01:22:47.929946 | orchestrator | | pinned_availability_zone | None |
2026-03-23 01:22:47.929952 | orchestrator | | progress | 0 |
2026-03-23 01:22:47.929958 | orchestrator | | project_id | 9d24ed7498264fde8d3f8f3d51ff80a9 |
2026-03-23 01:22:47.929965 | orchestrator | | properties | hostname='test-4' |
2026-03-23 01:22:47.929976 | orchestrator | | security_groups | name='icmp' |
2026-03-23 01:22:47.929998 | orchestrator | | | name='ssh' |
2026-03-23 01:22:47.930005 | orchestrator | | server_groups | None |
2026-03-23 01:22:47.930010 | orchestrator | | status | ACTIVE |
2026-03-23 01:22:47.930108 | orchestrator | | tags | test |
2026-03-23 01:22:47.930114 | orchestrator | | trusted_image_certificates | None |
2026-03-23 01:22:47.930122 | orchestrator | | updated | 2026-03-23T01:21:43Z |
2026-03-23 01:22:47.930127 | orchestrator | | user_id | 845dfe88ba0742feba44f98bf822dcb4 |
2026-03-23 01:22:47.930131 | orchestrator | | volumes_attached | delete_on_termination='True', id='a2c83a42-55ab-47f4-a045-d1569ecfdb74' |
2026-03-23 01:22:47.933283 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-23 01:22:48.171916 | orchestrator | + server_ping
2026-03-23 01:22:48.173267 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-23 01:22:48.173341 | orchestrator | ++ tr -d '\r'
2026-03-23 01:22:50.766875 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:22:50.766991 | orchestrator | + ping -c3 192.168.112.182
2026-03-23 01:22:50.782312 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2026-03-23 01:22:50.782392 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=8.43 ms
2026-03-23 01:22:51.776843 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=1.41 ms
2026-03-23 01:22:52.777863 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.12 ms
2026-03-23 01:22:52.777914 | orchestrator |
2026-03-23 01:22:52.777921 | orchestrator | --- 192.168.112.182 ping statistics ---
2026-03-23 01:22:52.777928 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:22:52.777933 | orchestrator | rtt min/avg/max/mdev = 1.117/3.650/8.427/3.379 ms
2026-03-23 01:22:52.777939 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:22:52.777944 | orchestrator | + ping -c3 192.168.112.145
2026-03-23 01:22:52.785071 | orchestrator | PING 192.168.112.145 (192.168.112.145) 56(84) bytes of data.
2026-03-23 01:22:52.785112 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=1 ttl=63 time=3.00 ms
2026-03-23 01:22:53.784951 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=2 ttl=63 time=1.41 ms
2026-03-23 01:22:54.786388 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=3 ttl=63 time=1.35 ms
2026-03-23 01:22:54.786434 | orchestrator |
2026-03-23 01:22:54.786439 | orchestrator | --- 192.168.112.145 ping statistics ---
2026-03-23 01:22:54.786443 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-23 01:22:54.786447 | orchestrator | rtt min/avg/max/mdev = 1.347/1.918/2.997/0.763 ms
2026-03-23 01:22:54.786457 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:22:54.786461 | orchestrator | + ping -c3 192.168.112.197
2026-03-23 01:22:54.794398 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data.
2026-03-23 01:22:54.794449 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=3.31 ms
2026-03-23 01:22:55.796360 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=1.73 ms
2026-03-23 01:22:56.796496 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.35 ms
2026-03-23 01:22:56.796550 | orchestrator |
2026-03-23 01:22:56.796559 | orchestrator | --- 192.168.112.197 ping statistics ---
2026-03-23 01:22:56.796567 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:22:56.796574 | orchestrator | rtt min/avg/max/mdev = 1.349/2.128/3.306/0.847 ms
2026-03-23 01:22:56.797227 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:22:56.797254 | orchestrator | + ping -c3 192.168.112.188
2026-03-23 01:22:56.805382 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-03-23 01:22:56.805430 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=4.35 ms
2026-03-23 01:22:57.804400 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.61 ms
2026-03-23 01:22:58.806183 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.29 ms
2026-03-23 01:22:58.806246 | orchestrator |
2026-03-23 01:22:58.806260 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-03-23 01:22:58.806272 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:22:58.806279 | orchestrator | rtt min/avg/max/mdev = 1.286/2.414/4.352/1.376 ms
2026-03-23 01:22:58.807102 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:22:58.807136 | orchestrator | + ping -c3 192.168.112.192
2026-03-23 01:22:58.815510 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2026-03-23 01:22:58.815571 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=4.25 ms
2026-03-23 01:22:59.814898 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.03 ms
2026-03-23 01:23:00.816608 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.80 ms
2026-03-23 01:23:00.817517 | orchestrator |
2026-03-23 01:23:00.817567 | orchestrator | --- 192.168.112.192 ping statistics ---
2026-03-23 01:23:00.817574 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:23:00.817580 | orchestrator | rtt min/avg/max/mdev = 1.799/2.694/4.254/1.107 ms
2026-03-23 01:23:00.817599 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-23 01:23:00.817607 | orchestrator | + compute_list
2026-03-23 01:23:00.817617 | orchestrator | + osism manage compute list testbed-node-3
2026-03-23 01:23:02.371233 | orchestrator | 2026-03-23 01:23:02 | ERROR  | Unable to get ansible vault password
2026-03-23 01:23:02.371304 | orchestrator | 2026-03-23 01:23:02 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:23:02.371312 | orchestrator | 2026-03-23 01:23:02 | ERROR  | Dropping encrypted entries
2026-03-23 01:23:05.467915 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:23:05.467970 | orchestrator | | ID | Name | Status |
2026-03-23 01:23:05.467978 | orchestrator | |--------------------------------------+--------+----------|
2026-03-23 01:23:05.467983 | orchestrator | | b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 | test-4 | ACTIVE |
2026-03-23 01:23:05.467987 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:23:05.659205 | orchestrator | + osism manage compute list testbed-node-4
2026-03-23 01:23:06.994981 | orchestrator | 2026-03-23 01:23:06 | ERROR  | Unable to get ansible vault password
2026-03-23 01:23:06.995037 | orchestrator | 2026-03-23 01:23:06 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:23:06.996057 | orchestrator | 2026-03-23 01:23:06 | ERROR  | Dropping encrypted entries
2026-03-23 01:23:08.042826 | orchestrator | +------+--------+----------+
2026-03-23 01:23:08.042887 | orchestrator | | ID | Name | Status |
2026-03-23 01:23:08.042894 | orchestrator | |------+--------+----------|
2026-03-23 01:23:08.042901 | orchestrator | +------+--------+----------+
2026-03-23 01:23:08.243036 | orchestrator | + osism manage compute list testbed-node-5
2026-03-23 01:23:09.624322 | orchestrator | 2026-03-23 01:23:09 | ERROR  | Unable to get ansible vault password
2026-03-23 01:23:09.624378 | orchestrator | 2026-03-23 01:23:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:23:09.624387 | orchestrator | 2026-03-23 01:23:09 | ERROR  | Dropping encrypted entries
2026-03-23 01:23:10.992231 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:23:10.992292 | orchestrator | | ID | Name | Status |
2026-03-23 01:23:10.992302 | orchestrator | |--------------------------------------+--------+----------|
2026-03-23 01:23:10.992308 | orchestrator | | 2e3a3ae5-0b02-41f8-bb1f-dfad340002da | test-3 | ACTIVE |
2026-03-23 01:23:10.992314 | orchestrator | | 3cb8204c-2351-4006-bb50-b26c97b2873a | test-1 | ACTIVE |
2026-03-23 01:23:10.992322 | orchestrator | | 54f7dcdb-6157-4929-a767-a148c7cd7c17 | test-2 | ACTIVE |
2026-03-23 01:23:10.992328 | orchestrator | | fb422083-6af7-42f2-b3b4-5b1430583079 | test | ACTIVE |
2026-03-23 01:23:10.992335 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:23:11.188267 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2026-03-23 01:23:12.656636 | orchestrator | 2026-03-23 01:23:12 | ERROR  | Unable to get ansible vault password
2026-03-23 01:23:12.657178 | orchestrator | 2026-03-23 01:23:12 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:23:12.657220 | orchestrator | 2026-03-23 01:23:12 | ERROR  | Dropping encrypted entries
2026-03-23 01:23:13.761447 | orchestrator | 2026-03-23 01:23:13 | INFO  | No migratable instances found on node testbed-node-4
2026-03-23 01:23:14.050447 | orchestrator | + compute_list
2026-03-23 01:23:14.050504 | orchestrator | + osism manage compute list testbed-node-3
2026-03-23 01:23:15.649295 | orchestrator | 2026-03-23 01:23:15 | ERROR  | Unable to get ansible vault password
2026-03-23 01:23:15.649370 | orchestrator | 2026-03-23 01:23:15 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:23:15.649383 | orchestrator | 2026-03-23 01:23:15 | ERROR  | Dropping encrypted entries
2026-03-23 01:23:16.911867 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:23:16.911918 | orchestrator | | ID | Name | Status |
2026-03-23 01:23:16.911923 | orchestrator | |--------------------------------------+--------+----------|
2026-03-23 01:23:16.911927 | orchestrator | | b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 | test-4 | ACTIVE |
2026-03-23 01:23:16.911931 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:23:17.186163 | orchestrator | + osism manage compute list testbed-node-4
2026-03-23 01:23:18.687811 | orchestrator | 2026-03-23 01:23:18 | ERROR  | Unable to get ansible vault password
2026-03-23 01:23:18.687883 | orchestrator | 2026-03-23 01:23:18 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:23:18.687891 | orchestrator | 2026-03-23 01:23:18 | ERROR  | Dropping encrypted entries
2026-03-23 01:23:19.550921 | orchestrator | +------+--------+----------+
2026-03-23 01:23:19.550976 | orchestrator | | ID | Name | Status |
2026-03-23 01:23:19.550982 | orchestrator | |------+--------+----------|
2026-03-23 01:23:19.550987 | orchestrator | +------+--------+----------+
2026-03-23 01:23:19.838393 | orchestrator | + osism manage compute list testbed-node-5
2026-03-23 01:23:21.398179 | orchestrator | 2026-03-23 01:23:21 | ERROR  | Unable to get ansible vault password
2026-03-23 01:23:21.398236 | orchestrator | 2026-03-23 01:23:21 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:23:21.398244 | orchestrator | 2026-03-23 01:23:21 | ERROR  | Dropping encrypted entries
2026-03-23 01:23:22.804141 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:23:22.804194 | orchestrator | | ID | Name | Status |
2026-03-23 01:23:22.804202 | orchestrator | |--------------------------------------+--------+----------|
2026-03-23 01:23:22.804208 | orchestrator | | 2e3a3ae5-0b02-41f8-bb1f-dfad340002da | test-3 | ACTIVE |
2026-03-23 01:23:22.804213 | orchestrator | | 3cb8204c-2351-4006-bb50-b26c97b2873a | test-1 | ACTIVE |
2026-03-23 01:23:22.804219 | orchestrator | | 54f7dcdb-6157-4929-a767-a148c7cd7c17 | test-2 | ACTIVE |
2026-03-23 01:23:22.804224 | orchestrator | | fb422083-6af7-42f2-b3b4-5b1430583079 | test | ACTIVE |
2026-03-23 01:23:22.804229 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:23:23.090884 | orchestrator | + server_ping
2026-03-23 01:23:23.092239 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-23 01:23:23.092280 | orchestrator | ++ tr -d '\r'
2026-03-23 01:23:25.807149 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:23:25.807219 | orchestrator | + ping -c3 192.168.112.182
2026-03-23 01:23:25.815144 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2026-03-23 01:23:25.815195 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=6.40 ms
2026-03-23 01:23:26.811991 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=1.63 ms
2026-03-23 01:23:27.813812 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.30 ms
2026-03-23 01:23:27.814348 | orchestrator |
2026-03-23 01:23:27.814391 | orchestrator | --- 192.168.112.182 ping statistics ---
2026-03-23 01:23:27.814399 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-23 01:23:27.814406 | orchestrator | rtt min/avg/max/mdev = 1.296/3.108/6.400/2.331 ms
2026-03-23 01:23:27.815358 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:23:27.815396 | orchestrator | + ping -c3 192.168.112.145
2026-03-23 01:23:27.824163 | orchestrator | PING 192.168.112.145 (192.168.112.145) 56(84) bytes of data.
2026-03-23 01:23:27.824219 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=1 ttl=63 time=4.88 ms
2026-03-23 01:23:28.822646 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=2 ttl=63 time=1.41 ms
2026-03-23 01:23:29.823183 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=3 ttl=63 time=1.31 ms
2026-03-23 01:23:29.823244 | orchestrator |
2026-03-23 01:23:29.823254 | orchestrator | --- 192.168.112.145 ping statistics ---
2026-03-23 01:23:29.823261 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-23 01:23:29.823268 | orchestrator | rtt min/avg/max/mdev = 1.310/2.534/4.882/1.660 ms
2026-03-23 01:23:29.823274 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:23:29.823282 | orchestrator | + ping -c3 192.168.112.197
2026-03-23 01:23:29.830999 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data.
2026-03-23 01:23:29.831061 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=3.26 ms
2026-03-23 01:23:30.830474 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=1.26 ms
2026-03-23 01:23:31.832616 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.41 ms
2026-03-23 01:23:31.832692 | orchestrator |
2026-03-23 01:23:31.832848 | orchestrator | --- 192.168.112.197 ping statistics ---
2026-03-23 01:23:31.832863 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:23:31.832869 | orchestrator | rtt min/avg/max/mdev = 1.264/1.976/3.255/0.905 ms
2026-03-23 01:23:31.832883 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:23:31.832982 | orchestrator | + ping -c3 192.168.112.188
2026-03-23 01:23:31.843216 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-03-23 01:23:31.843273 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=5.46 ms
2026-03-23 01:23:32.840788 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.43 ms
2026-03-23 01:23:33.842618 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.04 ms
2026-03-23 01:23:33.842682 | orchestrator |
2026-03-23 01:23:33.842696 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-03-23 01:23:33.842705 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:23:33.842713 | orchestrator | rtt min/avg/max/mdev = 1.042/2.642/5.459/1.997 ms
2026-03-23 01:23:33.842732 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:23:33.842766 | orchestrator | + ping -c3 192.168.112.192
2026-03-23 01:23:33.851257 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2026-03-23 01:23:33.851322 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=3.58 ms
2026-03-23 01:23:34.850084 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=1.32 ms
2026-03-23 01:23:35.851208 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.16 ms
2026-03-23 01:23:35.851266 | orchestrator |
2026-03-23 01:23:35.851277 | orchestrator | --- 192.168.112.192 ping statistics ---
2026-03-23 01:23:35.851286 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-23 01:23:35.851306 | orchestrator | rtt min/avg/max/mdev = 1.164/2.020/3.583/1.106 ms
2026-03-23 01:23:35.851314 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-03-23 01:23:37.365738 | orchestrator | 2026-03-23 01:23:37 | ERROR  | Unable to get ansible vault password
2026-03-23 01:23:37.365813 | orchestrator | 2026-03-23 01:23:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:23:37.365839 | orchestrator | 2026-03-23 01:23:37 | ERROR  | Dropping encrypted entries
2026-03-23 01:23:38.878789 | orchestrator | 2026-03-23 01:23:38 | INFO  | Live migrating server 2e3a3ae5-0b02-41f8-bb1f-dfad340002da
2026-03-23 01:23:52.387414 | orchestrator | 2026-03-23 01:23:52 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress
2026-03-23 01:23:54.878072 | orchestrator | 2026-03-23 01:23:54 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress
2026-03-23 01:23:57.193761 | orchestrator | 2026-03-23 01:23:57 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress
2026-03-23 01:23:59.919560 | orchestrator | 2026-03-23 01:23:59 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress
2026-03-23 01:24:02.270215 | orchestrator | 2026-03-23 01:24:02 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress
2026-03-23 01:24:04.456896 | orchestrator | 2026-03-23 01:24:04 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress
2026-03-23 01:24:06.659002 | orchestrator | 2026-03-23 01:24:06 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress
2026-03-23 01:24:08.973648 | orchestrator | 2026-03-23 01:24:08 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress
2026-03-23 01:24:11.302147 | orchestrator | 2026-03-23 01:24:11 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) completed with status ACTIVE
2026-03-23 01:24:11.302228 | orchestrator | 2026-03-23 01:24:11 | INFO  | Live migrating server 3cb8204c-2351-4006-bb50-b26c97b2873a
2026-03-23 01:24:24.307192 | orchestrator | 2026-03-23 01:24:24 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:24:26.564786 | orchestrator | 2026-03-23 01:24:26 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:24:29.029954 | orchestrator | 2026-03-23 01:24:29 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:24:31.334949 | orchestrator | 2026-03-23 01:24:31 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:24:33.582148 | orchestrator | 2026-03-23 01:24:33 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:24:35.899367 | orchestrator | 2026-03-23 01:24:35 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:24:38.310861 | orchestrator | 2026-03-23 01:24:38 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:24:40.502651 | orchestrator | 2026-03-23 01:24:40 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:24:42.742408 | orchestrator | 2026-03-23 01:24:42 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) completed with status ACTIVE
2026-03-23 01:24:42.742461 | orchestrator | 2026-03-23 01:24:42 | INFO  | Live migrating server 54f7dcdb-6157-4929-a767-a148c7cd7c17
2026-03-23 01:24:53.832395 | orchestrator | 2026-03-23 01:24:53 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:24:56.077410 | orchestrator | 2026-03-23 01:24:56 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:24:58.501171 | orchestrator | 2026-03-23 01:24:58 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:25:00.747765 | orchestrator | 2026-03-23 01:25:00 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:25:02.977685 | orchestrator | 2026-03-23 01:25:02 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:25:05.213443 | orchestrator | 2026-03-23 01:25:05 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:25:07.461674 | orchestrator | 2026-03-23 01:25:07 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:25:09.854280 | orchestrator | 2026-03-23 01:25:09 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:25:12.238340 | orchestrator | 2026-03-23 01:25:12 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) completed with status ACTIVE
2026-03-23 01:25:12.238441 | orchestrator | 2026-03-23 01:25:12 | INFO  | Live migrating server fb422083-6af7-42f2-b3b4-5b1430583079
2026-03-23 01:25:24.493036 | orchestrator | 2026-03-23 01:25:24 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:26.851849 | orchestrator | 2026-03-23 01:25:26 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:29.136939 | orchestrator | 2026-03-23 01:25:29 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:31.487123 | orchestrator | 2026-03-23 01:25:31 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:33.780059 | orchestrator | 2026-03-23 01:25:33 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:35.986592 | orchestrator | 2026-03-23 01:25:35 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:38.239013 | orchestrator | 2026-03-23 01:25:38 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:40.434839 | orchestrator | 2026-03-23 01:25:40 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:42.708782 | orchestrator | 2026-03-23 01:25:42 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:45.123419 | orchestrator | 2026-03-23 01:25:45 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:25:47.481389 | orchestrator | 2026-03-23 01:25:47 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) completed with status ACTIVE
2026-03-23 01:25:47.756178 | orchestrator | + compute_list
2026-03-23 01:25:47.756227 | orchestrator | + osism manage compute list testbed-node-3
2026-03-23 01:25:49.153080 | orchestrator | 2026-03-23 01:25:49 | ERROR  | Unable to get ansible vault password
2026-03-23 01:25:49.153159 | orchestrator | 2026-03-23 01:25:49 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:25:49.153168 | orchestrator | 2026-03-23 01:25:49 | ERROR  | Dropping encrypted entries
2026-03-23 01:25:51.188764 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:25:51.188855 | orchestrator | | ID | Name | Status |
2026-03-23 01:25:51.188865 | orchestrator | |--------------------------------------+--------+----------|
2026-03-23 01:25:51.188872 | orchestrator | | b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 | test-4 | ACTIVE |
2026-03-23 01:25:51.188904 | orchestrator | | 2e3a3ae5-0b02-41f8-bb1f-dfad340002da | test-3 | ACTIVE |
2026-03-23 01:25:51.188911 | orchestrator | | 3cb8204c-2351-4006-bb50-b26c97b2873a | test-1 | ACTIVE |
2026-03-23 01:25:51.188917 | orchestrator | | 54f7dcdb-6157-4929-a767-a148c7cd7c17 | test-2 | ACTIVE |
2026-03-23 01:25:51.188923 | orchestrator | | fb422083-6af7-42f2-b3b4-5b1430583079 | test | ACTIVE |
2026-03-23 01:25:51.188930 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:25:51.422516 | orchestrator | + osism manage compute list testbed-node-4
2026-03-23 01:25:52.927689 | orchestrator | 2026-03-23 01:25:52 | ERROR  | Unable to get ansible vault password
2026-03-23 01:25:52.927765 | orchestrator | 2026-03-23 01:25:52 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:25:52.927773 | orchestrator | 2026-03-23 01:25:52 | ERROR  | Dropping encrypted entries
2026-03-23 01:25:54.000340 | orchestrator | +------+--------+----------+
2026-03-23 01:25:54.000397 | orchestrator | | ID | Name | Status |
2026-03-23 01:25:54.000402 | orchestrator | |------+--------+----------|
2026-03-23 01:25:54.000407 | orchestrator | +------+--------+----------+
2026-03-23 01:25:54.316822 | orchestrator | + osism manage compute list testbed-node-5
2026-03-23 01:25:55.960822 | orchestrator | 2026-03-23 01:25:55 | ERROR  | Unable to get ansible vault password
2026-03-23 01:25:55.961751 | orchestrator | 2026-03-23 01:25:55 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:25:55.961821 | orchestrator | 2026-03-23 01:25:55 | ERROR  | Dropping encrypted entries
2026-03-23 01:25:56.910222 | orchestrator | +------+--------+----------+
2026-03-23 01:25:56.910284 | orchestrator | | ID | Name | Status |
2026-03-23 01:25:56.910295 | orchestrator | |------+--------+----------|
2026-03-23 01:25:56.910306 | orchestrator | +------+--------+----------+
2026-03-23 01:25:57.201162 | orchestrator | + server_ping
2026-03-23 01:25:57.201943 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-23 01:25:57.202298 | orchestrator | ++ tr -d '\r'
2026-03-23 01:26:00.019318 |
orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:26:00.019398 | orchestrator | + ping -c3 192.168.112.182 2026-03-23 01:26:00.035480 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 2026-03-23 01:26:00.035575 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=8.91 ms 2026-03-23 01:26:01.029999 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.25 ms 2026-03-23 01:26:02.031286 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.87 ms 2026-03-23 01:26:02.031389 | orchestrator | 2026-03-23 01:26:02.031398 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-03-23 01:26:02.031405 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-23 01:26:02.031412 | orchestrator | rtt min/avg/max/mdev = 1.870/4.345/8.914/3.234 ms 2026-03-23 01:26:02.031788 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:26:02.031811 | orchestrator | + ping -c3 192.168.112.145 2026-03-23 01:26:02.042248 | orchestrator | PING 192.168.112.145 (192.168.112.145) 56(84) bytes of data. 
2026-03-23 01:26:02.042330 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=1 ttl=63 time=5.53 ms 2026-03-23 01:26:03.040607 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=2 ttl=63 time=2.14 ms 2026-03-23 01:26:04.042781 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=3 ttl=63 time=2.37 ms 2026-03-23 01:26:04.042863 | orchestrator | 2026-03-23 01:26:04.042874 | orchestrator | --- 192.168.112.145 ping statistics --- 2026-03-23 01:26:04.042884 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-23 01:26:04.042892 | orchestrator | rtt min/avg/max/mdev = 2.144/3.348/5.526/1.542 ms 2026-03-23 01:26:04.042899 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:26:04.042906 | orchestrator | + ping -c3 192.168.112.197 2026-03-23 01:26:04.053240 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 2026-03-23 01:26:04.053351 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=6.30 ms 2026-03-23 01:26:05.050567 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=1.82 ms 2026-03-23 01:26:06.051895 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.25 ms 2026-03-23 01:26:06.051951 | orchestrator | 2026-03-23 01:26:06.051961 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-03-23 01:26:06.051969 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-23 01:26:06.051976 | orchestrator | rtt min/avg/max/mdev = 1.247/3.122/6.301/2.259 ms 2026-03-23 01:26:06.051984 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:26:06.051991 | orchestrator | + ping -c3 192.168.112.188 2026-03-23 01:26:06.060433 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 
2026-03-23 01:26:06.060490 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=4.66 ms 2026-03-23 01:26:07.058283 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.26 ms 2026-03-23 01:26:08.059986 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.10 ms 2026-03-23 01:26:08.060107 | orchestrator | 2026-03-23 01:26:08.060119 | orchestrator | --- 192.168.112.188 ping statistics --- 2026-03-23 01:26:08.060126 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-23 01:26:08.060134 | orchestrator | rtt min/avg/max/mdev = 1.098/2.340/4.660/1.641 ms 2026-03-23 01:26:08.060426 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:26:08.060441 | orchestrator | + ping -c3 192.168.112.192 2026-03-23 01:26:08.069367 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2026-03-23 01:26:08.069416 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=4.47 ms 2026-03-23 01:26:09.068091 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=1.38 ms 2026-03-23 01:26:10.069762 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.40 ms 2026-03-23 01:26:10.069810 | orchestrator | 2026-03-23 01:26:10.069816 | orchestrator | --- 192.168.112.192 ping statistics --- 2026-03-23 01:26:10.069821 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-23 01:26:10.069825 | orchestrator | rtt min/avg/max/mdev = 1.380/2.415/4.471/1.453 ms 2026-03-23 01:26:10.069830 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2026-03-23 01:26:11.630957 | orchestrator | 2026-03-23 01:26:11 | ERROR  | Unable to get ansible vault password 2026-03-23 01:26:11.631007 | orchestrator | 2026-03-23 01:26:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-03-23 01:26:11.631015 | orchestrator | 2026-03-23 01:26:11 | ERROR  | Dropping encrypted entries 2026-03-23 01:26:13.381406 | orchestrator | 2026-03-23 01:26:13 | INFO  | Live migrating server b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 2026-03-23 01:26:24.651912 | orchestrator | 2026-03-23 01:26:24 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:27.057278 | orchestrator | 2026-03-23 01:26:27 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:29.367864 | orchestrator | 2026-03-23 01:26:29 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:31.614493 | orchestrator | 2026-03-23 01:26:31 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:33.970723 | orchestrator | 2026-03-23 01:26:33 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:36.318725 | orchestrator | 2026-03-23 01:26:36 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:38.613620 | orchestrator | 2026-03-23 01:26:38 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:40.812862 | orchestrator | 2026-03-23 01:26:40 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:43.172421 | orchestrator | 2026-03-23 01:26:43 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:45.565276 | orchestrator | 2026-03-23 01:26:45 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:47.764985 | orchestrator | 2026-03-23 01:26:47 | INFO  | Live migration of 
b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:26:49.963013 | orchestrator | 2026-03-23 01:26:49 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) completed with status ACTIVE 2026-03-23 01:26:49.963064 | orchestrator | 2026-03-23 01:26:49 | INFO  | Live migrating server 2e3a3ae5-0b02-41f8-bb1f-dfad340002da 2026-03-23 01:26:59.623806 | orchestrator | 2026-03-23 01:26:59 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:27:02.062495 | orchestrator | 2026-03-23 01:27:02 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:27:04.334204 | orchestrator | 2026-03-23 01:27:04 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:27:06.616272 | orchestrator | 2026-03-23 01:27:06 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:27:08.884498 | orchestrator | 2026-03-23 01:27:08 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:27:11.075122 | orchestrator | 2026-03-23 01:27:11 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:27:13.383659 | orchestrator | 2026-03-23 01:27:13 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:27:15.645157 | orchestrator | 2026-03-23 01:27:15 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:27:17.904546 | orchestrator | 2026-03-23 01:27:17 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) completed with status ACTIVE 2026-03-23 01:27:17.904672 | orchestrator | 2026-03-23 01:27:17 | INFO  | Live migrating server 3cb8204c-2351-4006-bb50-b26c97b2873a 2026-03-23 01:27:27.645193 | orchestrator | 2026-03-23 
01:27:27 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:27:29.954811 | orchestrator | 2026-03-23 01:27:29 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:27:32.317464 | orchestrator | 2026-03-23 01:27:32 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:27:34.660739 | orchestrator | 2026-03-23 01:27:34 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:27:36.992013 | orchestrator | 2026-03-23 01:27:36 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:27:39.312579 | orchestrator | 2026-03-23 01:27:39 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:27:41.525508 | orchestrator | 2026-03-23 01:27:41 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:27:43.807797 | orchestrator | 2026-03-23 01:27:43 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:27:46.147363 | orchestrator | 2026-03-23 01:27:46 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) completed with status ACTIVE 2026-03-23 01:27:46.147441 | orchestrator | 2026-03-23 01:27:46 | INFO  | Live migrating server 54f7dcdb-6157-4929-a767-a148c7cd7c17 2026-03-23 01:27:57.818585 | orchestrator | 2026-03-23 01:27:57 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress 2026-03-23 01:28:00.140908 | orchestrator | 2026-03-23 01:28:00 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress 2026-03-23 01:28:02.528736 | orchestrator | 2026-03-23 01:28:02 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress 
2026-03-23 01:28:04.746877 | orchestrator | 2026-03-23 01:28:04 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress 2026-03-23 01:28:07.008750 | orchestrator | 2026-03-23 01:28:07 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress 2026-03-23 01:28:09.348791 | orchestrator | 2026-03-23 01:28:09 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress 2026-03-23 01:28:11.668511 | orchestrator | 2026-03-23 01:28:11 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress 2026-03-23 01:28:14.012247 | orchestrator | 2026-03-23 01:28:14 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress 2026-03-23 01:28:16.343800 | orchestrator | 2026-03-23 01:28:16 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) completed with status ACTIVE 2026-03-23 01:28:16.343887 | orchestrator | 2026-03-23 01:28:16 | INFO  | Live migrating server fb422083-6af7-42f2-b3b4-5b1430583079 2026-03-23 01:28:25.532407 | orchestrator | 2026-03-23 01:28:25 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:27.891025 | orchestrator | 2026-03-23 01:28:27 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:30.235594 | orchestrator | 2026-03-23 01:28:30 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:32.580598 | orchestrator | 2026-03-23 01:28:32 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:34.930618 | orchestrator | 2026-03-23 01:28:34 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:37.261283 | orchestrator | 2026-03-23 01:28:37 | INFO  | Live migration of 
fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:39.628991 | orchestrator | 2026-03-23 01:28:39 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:41.853436 | orchestrator | 2026-03-23 01:28:41 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:44.230165 | orchestrator | 2026-03-23 01:28:44 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:46.513599 | orchestrator | 2026-03-23 01:28:46 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress 2026-03-23 01:28:48.785131 | orchestrator | 2026-03-23 01:28:48 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) completed with status ACTIVE 2026-03-23 01:28:49.065396 | orchestrator | + compute_list 2026-03-23 01:28:49.065468 | orchestrator | + osism manage compute list testbed-node-3 2026-03-23 01:28:50.665813 | orchestrator | 2026-03-23 01:28:50 | ERROR  | Unable to get ansible vault password 2026-03-23 01:28:50.665959 | orchestrator | 2026-03-23 01:28:50 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-23 01:28:50.665980 | orchestrator | 2026-03-23 01:28:50 | ERROR  | Dropping encrypted entries 2026-03-23 01:28:51.771272 | orchestrator | +------+--------+----------+ 2026-03-23 01:28:51.771334 | orchestrator | | ID | Name | Status | 2026-03-23 01:28:51.771345 | orchestrator | |------+--------+----------| 2026-03-23 01:28:51.771350 | orchestrator | +------+--------+----------+ 2026-03-23 01:28:52.052222 | orchestrator | + osism manage compute list testbed-node-4 2026-03-23 01:28:53.605822 | orchestrator | 2026-03-23 01:28:53 | ERROR  | Unable to get ansible vault password 2026-03-23 01:28:53.605869 | orchestrator | 2026-03-23 01:28:53 | ERROR  | Unable to get vault secret: [Errno 2] No such 
file or directory: '/share/ansible_vault_password.key' 2026-03-23 01:28:53.605876 | orchestrator | 2026-03-23 01:28:53 | ERROR  | Dropping encrypted entries 2026-03-23 01:28:54.959499 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-23 01:28:54.959563 | orchestrator | | ID | Name | Status | 2026-03-23 01:28:54.959574 | orchestrator | |--------------------------------------+--------+----------| 2026-03-23 01:28:54.959582 | orchestrator | | b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 | test-4 | ACTIVE | 2026-03-23 01:28:54.959588 | orchestrator | | 2e3a3ae5-0b02-41f8-bb1f-dfad340002da | test-3 | ACTIVE | 2026-03-23 01:28:54.959595 | orchestrator | | 3cb8204c-2351-4006-bb50-b26c97b2873a | test-1 | ACTIVE | 2026-03-23 01:28:54.959602 | orchestrator | | 54f7dcdb-6157-4929-a767-a148c7cd7c17 | test-2 | ACTIVE | 2026-03-23 01:28:54.959609 | orchestrator | | fb422083-6af7-42f2-b3b4-5b1430583079 | test | ACTIVE | 2026-03-23 01:28:54.959617 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-23 01:28:55.255355 | orchestrator | + osism manage compute list testbed-node-5 2026-03-23 01:28:56.852971 | orchestrator | 2026-03-23 01:28:56 | ERROR  | Unable to get ansible vault password 2026-03-23 01:28:56.853071 | orchestrator | 2026-03-23 01:28:56 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-23 01:28:56.853088 | orchestrator | 2026-03-23 01:28:56 | ERROR  | Dropping encrypted entries 2026-03-23 01:28:57.929510 | orchestrator | +------+--------+----------+ 2026-03-23 01:28:57.929590 | orchestrator | | ID | Name | Status | 2026-03-23 01:28:57.929598 | orchestrator | |------+--------+----------| 2026-03-23 01:28:57.929603 | orchestrator | +------+--------+----------+ 2026-03-23 01:28:58.213405 | orchestrator | + server_ping 2026-03-23 01:28:58.214908 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 
'Floating IP Address' 2026-03-23 01:28:58.214953 | orchestrator | ++ tr -d '\r' 2026-03-23 01:29:00.956132 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:29:00.956243 | orchestrator | + ping -c3 192.168.112.182 2026-03-23 01:29:00.968092 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data. 2026-03-23 01:29:00.968159 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=9.67 ms 2026-03-23 01:29:01.962850 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=2.30 ms 2026-03-23 01:29:02.964050 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=2.22 ms 2026-03-23 01:29:02.964142 | orchestrator | 2026-03-23 01:29:02.964152 | orchestrator | --- 192.168.112.182 ping statistics --- 2026-03-23 01:29:02.964160 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-23 01:29:02.964168 | orchestrator | rtt min/avg/max/mdev = 2.217/4.729/9.673/3.496 ms 2026-03-23 01:29:02.964727 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:29:02.964776 | orchestrator | + ping -c3 192.168.112.145 2026-03-23 01:29:02.974977 | orchestrator | PING 192.168.112.145 (192.168.112.145) 56(84) bytes of data. 
2026-03-23 01:29:02.975071 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=1 ttl=63 time=6.27 ms 2026-03-23 01:29:03.972662 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=2 ttl=63 time=2.24 ms 2026-03-23 01:29:04.973793 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=3 ttl=63 time=1.74 ms 2026-03-23 01:29:04.974153 | orchestrator | 2026-03-23 01:29:04.974185 | orchestrator | --- 192.168.112.145 ping statistics --- 2026-03-23 01:29:04.974194 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-23 01:29:04.974253 | orchestrator | rtt min/avg/max/mdev = 1.743/3.416/6.267/2.025 ms 2026-03-23 01:29:04.975038 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:29:04.975074 | orchestrator | + ping -c3 192.168.112.197 2026-03-23 01:29:04.986371 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data. 2026-03-23 01:29:04.986439 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=7.30 ms 2026-03-23 01:29:05.983728 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=3.00 ms 2026-03-23 01:29:06.984059 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.95 ms 2026-03-23 01:29:06.984359 | orchestrator | 2026-03-23 01:29:06.984375 | orchestrator | --- 192.168.112.197 ping statistics --- 2026-03-23 01:29:06.984384 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-23 01:29:06.984393 | orchestrator | rtt min/avg/max/mdev = 1.945/4.082/7.303/2.317 ms 2026-03-23 01:29:06.985289 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:29:06.985306 | orchestrator | + ping -c3 192.168.112.188 2026-03-23 01:29:06.995521 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 
2026-03-23 01:29:06.995607 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=6.20 ms 2026-03-23 01:29:07.993374 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=1.99 ms 2026-03-23 01:29:08.994481 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.76 ms 2026-03-23 01:29:08.994569 | orchestrator | 2026-03-23 01:29:08.994580 | orchestrator | --- 192.168.112.188 ping statistics --- 2026-03-23 01:29:08.994591 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-23 01:29:08.994599 | orchestrator | rtt min/avg/max/mdev = 1.758/3.316/6.198/2.040 ms 2026-03-23 01:29:08.995374 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-23 01:29:08.995432 | orchestrator | + ping -c3 192.168.112.192 2026-03-23 01:29:09.006355 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2026-03-23 01:29:09.006444 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.21 ms 2026-03-23 01:29:10.004294 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.14 ms 2026-03-23 01:29:11.004947 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.88 ms 2026-03-23 01:29:11.005040 | orchestrator | 2026-03-23 01:29:11.005051 | orchestrator | --- 192.168.112.192 ping statistics --- 2026-03-23 01:29:11.005060 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-23 01:29:11.005067 | orchestrator | rtt min/avg/max/mdev = 1.881/3.408/6.209/1.983 ms 2026-03-23 01:29:11.005697 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-03-23 01:29:12.560999 | orchestrator | 2026-03-23 01:29:12 | ERROR  | Unable to get ansible vault password 2026-03-23 01:29:12.561068 | orchestrator | 2026-03-23 01:29:12 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-03-23 01:29:12.561076 | orchestrator | 2026-03-23 01:29:12 | ERROR  | Dropping encrypted entries 2026-03-23 01:29:14.073187 | orchestrator | 2026-03-23 01:29:14 | INFO  | Live migrating server b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 2026-03-23 01:29:23.418733 | orchestrator | 2026-03-23 01:29:23 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:29:25.767767 | orchestrator | 2026-03-23 01:29:25 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:29:28.027280 | orchestrator | 2026-03-23 01:29:28 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:29:30.234585 | orchestrator | 2026-03-23 01:29:30 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:29:32.486655 | orchestrator | 2026-03-23 01:29:32 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:29:34.701496 | orchestrator | 2026-03-23 01:29:34 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:29:36.982149 | orchestrator | 2026-03-23 01:29:36 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:29:39.259338 | orchestrator | 2026-03-23 01:29:39 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) is still in progress 2026-03-23 01:29:41.530861 | orchestrator | 2026-03-23 01:29:41 | INFO  | Live migration of b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 (test-4) completed with status ACTIVE 2026-03-23 01:29:41.530952 | orchestrator | 2026-03-23 01:29:41 | INFO  | Live migrating server 2e3a3ae5-0b02-41f8-bb1f-dfad340002da 2026-03-23 01:29:51.322508 | orchestrator | 2026-03-23 01:29:51 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is 
still in progress 2026-03-23 01:29:53.652985 | orchestrator | 2026-03-23 01:29:53 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:29:56.031102 | orchestrator | 2026-03-23 01:29:56 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:29:58.300265 | orchestrator | 2026-03-23 01:29:58 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:30:00.699472 | orchestrator | 2026-03-23 01:30:00 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:30:03.021881 | orchestrator | 2026-03-23 01:30:03 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:30:05.278616 | orchestrator | 2026-03-23 01:30:05 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:30:07.537799 | orchestrator | 2026-03-23 01:30:07 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) is still in progress 2026-03-23 01:30:09.793423 | orchestrator | 2026-03-23 01:30:09 | INFO  | Live migration of 2e3a3ae5-0b02-41f8-bb1f-dfad340002da (test-3) completed with status ACTIVE 2026-03-23 01:30:09.793484 | orchestrator | 2026-03-23 01:30:09 | INFO  | Live migrating server 3cb8204c-2351-4006-bb50-b26c97b2873a 2026-03-23 01:30:20.890711 | orchestrator | 2026-03-23 01:30:20 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:30:23.214430 | orchestrator | 2026-03-23 01:30:23 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:30:25.556688 | orchestrator | 2026-03-23 01:30:25 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress 2026-03-23 01:30:27.820038 | orchestrator | 2026-03-23 01:30:27 | INFO  | Live migration of 
3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:30:30.124874 | orchestrator | 2026-03-23 01:30:30 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:30:32.436233 | orchestrator | 2026-03-23 01:30:32 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:30:34.771379 | orchestrator | 2026-03-23 01:30:34 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:30:36.985429 | orchestrator | 2026-03-23 01:30:36 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) is still in progress
2026-03-23 01:30:39.366586 | orchestrator | 2026-03-23 01:30:39 | INFO  | Live migration of 3cb8204c-2351-4006-bb50-b26c97b2873a (test-1) completed with status ACTIVE
2026-03-23 01:30:39.366658 | orchestrator | 2026-03-23 01:30:39 | INFO  | Live migrating server 54f7dcdb-6157-4929-a767-a148c7cd7c17
2026-03-23 01:30:49.679731 | orchestrator | 2026-03-23 01:30:49 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:30:51.998887 | orchestrator | 2026-03-23 01:30:51 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:30:54.420736 | orchestrator | 2026-03-23 01:30:54 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:30:57.004081 | orchestrator | 2026-03-23 01:30:57 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:30:59.315637 | orchestrator | 2026-03-23 01:30:59 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:31:01.620549 | orchestrator | 2026-03-23 01:31:01 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:31:03.894333 | orchestrator | 2026-03-23 01:31:03 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:31:06.374858 | orchestrator | 2026-03-23 01:31:06 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) is still in progress
2026-03-23 01:31:08.702082 | orchestrator | 2026-03-23 01:31:08 | INFO  | Live migration of 54f7dcdb-6157-4929-a767-a148c7cd7c17 (test-2) completed with status ACTIVE
2026-03-23 01:31:08.702165 | orchestrator | 2026-03-23 01:31:08 | INFO  | Live migrating server fb422083-6af7-42f2-b3b4-5b1430583079
2026-03-23 01:31:18.919975 | orchestrator | 2026-03-23 01:31:18 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:31:21.268189 | orchestrator | 2026-03-23 01:31:21 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:31:23.557166 | orchestrator | 2026-03-23 01:31:23 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:31:25.857587 | orchestrator | 2026-03-23 01:31:25 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:31:28.190645 | orchestrator | 2026-03-23 01:31:28 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:31:30.620488 | orchestrator | 2026-03-23 01:31:30 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:31:32.901159 | orchestrator | 2026-03-23 01:31:32 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:31:35.202825 | orchestrator | 2026-03-23 01:31:35 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:31:37.739681 | orchestrator | 2026-03-23 01:31:37 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) is still in progress
2026-03-23 01:31:40.099110 | orchestrator | 2026-03-23 01:31:40 | INFO  | Live migration of fb422083-6af7-42f2-b3b4-5b1430583079 (test) completed with status ACTIVE
2026-03-23 01:31:40.436737 | orchestrator | + compute_list
2026-03-23 01:31:40.436818 | orchestrator | + osism manage compute list testbed-node-3
2026-03-23 01:31:42.037023 | orchestrator | 2026-03-23 01:31:42 | ERROR  | Unable to get ansible vault password
2026-03-23 01:31:42.037082 | orchestrator | 2026-03-23 01:31:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:31:42.037093 | orchestrator | 2026-03-23 01:31:42 | ERROR  | Dropping encrypted entries
2026-03-23 01:31:43.102973 | orchestrator | +------+--------+----------+
2026-03-23 01:31:43.103030 | orchestrator | | ID   | Name   | Status   |
2026-03-23 01:31:43.103040 | orchestrator | |------+--------+----------|
2026-03-23 01:31:43.103055 | orchestrator | +------+--------+----------+
2026-03-23 01:31:43.428632 | orchestrator | + osism manage compute list testbed-node-4
2026-03-23 01:31:45.072568 | orchestrator | 2026-03-23 01:31:45 | ERROR  | Unable to get ansible vault password
2026-03-23 01:31:45.072678 | orchestrator | 2026-03-23 01:31:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:31:45.072698 | orchestrator | 2026-03-23 01:31:45 | ERROR  | Dropping encrypted entries
2026-03-23 01:31:46.171139 | orchestrator | +------+--------+----------+
2026-03-23 01:31:46.171226 | orchestrator | | ID   | Name   | Status   |
2026-03-23 01:31:46.171239 | orchestrator | |------+--------+----------|
2026-03-23 01:31:46.171243 | orchestrator | +------+--------+----------+
2026-03-23 01:31:46.481525 | orchestrator | + osism manage compute list testbed-node-5
2026-03-23 01:31:48.086800 | orchestrator | 2026-03-23 01:31:48 | ERROR  | Unable to get ansible vault password
2026-03-23 01:31:48.086887 | orchestrator | 2026-03-23 01:31:48 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-23 01:31:48.086907 | orchestrator | 2026-03-23 01:31:48 | ERROR  | Dropping encrypted entries
2026-03-23 01:31:49.571061 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:31:49.571144 | orchestrator | | ID                                   | Name   | Status   |
2026-03-23 01:31:49.571155 | orchestrator | |--------------------------------------+--------+----------|
2026-03-23 01:31:49.571162 | orchestrator | | b6ac3ee9-4469-4652-9dd9-06dc3b2b50e6 | test-4 | ACTIVE   |
2026-03-23 01:31:49.571169 | orchestrator | | 2e3a3ae5-0b02-41f8-bb1f-dfad340002da | test-3 | ACTIVE   |
2026-03-23 01:31:49.571176 | orchestrator | | 3cb8204c-2351-4006-bb50-b26c97b2873a | test-1 | ACTIVE   |
2026-03-23 01:31:49.571182 | orchestrator | | 54f7dcdb-6157-4929-a767-a148c7cd7c17 | test-2 | ACTIVE   |
2026-03-23 01:31:49.571189 | orchestrator | | fb422083-6af7-42f2-b3b4-5b1430583079 | test   | ACTIVE   |
2026-03-23 01:31:49.571196 | orchestrator | +--------------------------------------+--------+----------+
2026-03-23 01:31:49.887538 | orchestrator | + server_ping
2026-03-23 01:31:49.888880 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-23 01:31:49.888930 | orchestrator | ++ tr -d '\r'
2026-03-23 01:31:52.970503 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:31:52.970576 | orchestrator | + ping -c3 192.168.112.182
2026-03-23 01:31:52.980453 | orchestrator | PING 192.168.112.182 (192.168.112.182) 56(84) bytes of data.
2026-03-23 01:31:52.980528 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=1 ttl=63 time=7.10 ms
2026-03-23 01:31:53.975875 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=2 ttl=63 time=1.45 ms
2026-03-23 01:31:54.977971 | orchestrator | 64 bytes from 192.168.112.182: icmp_seq=3 ttl=63 time=1.29 ms
2026-03-23 01:31:54.978037 | orchestrator |
2026-03-23 01:31:54.978044 | orchestrator | --- 192.168.112.182 ping statistics ---
2026-03-23 01:31:54.978049 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:31:54.978054 | orchestrator | rtt min/avg/max/mdev = 1.289/3.280/7.104/2.704 ms
2026-03-23 01:31:54.978831 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:31:54.978856 | orchestrator | + ping -c3 192.168.112.145
2026-03-23 01:31:54.986728 | orchestrator | PING 192.168.112.145 (192.168.112.145) 56(84) bytes of data.
2026-03-23 01:31:54.986777 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=1 ttl=63 time=3.80 ms
2026-03-23 01:31:55.987516 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=2 ttl=63 time=2.44 ms
2026-03-23 01:31:56.987827 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=3 ttl=63 time=1.41 ms
2026-03-23 01:31:56.987900 | orchestrator |
2026-03-23 01:31:56.987908 | orchestrator | --- 192.168.112.145 ping statistics ---
2026-03-23 01:31:56.987913 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:31:56.987917 | orchestrator | rtt min/avg/max/mdev = 1.408/2.547/3.800/0.979 ms
2026-03-23 01:31:56.988795 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:31:56.988821 | orchestrator | + ping -c3 192.168.112.197
2026-03-23 01:31:56.994410 | orchestrator | PING 192.168.112.197 (192.168.112.197) 56(84) bytes of data.
2026-03-23 01:31:56.994459 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=1 ttl=63 time=3.19 ms
2026-03-23 01:31:57.994287 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=2 ttl=63 time=1.44 ms
2026-03-23 01:31:58.996250 | orchestrator | 64 bytes from 192.168.112.197: icmp_seq=3 ttl=63 time=1.50 ms
2026-03-23 01:31:58.996659 | orchestrator |
2026-03-23 01:31:58.996686 | orchestrator | --- 192.168.112.197 ping statistics ---
2026-03-23 01:31:58.996695 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:31:58.996704 | orchestrator | rtt min/avg/max/mdev = 1.441/2.042/3.187/0.809 ms
2026-03-23 01:31:58.997083 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:31:58.997203 | orchestrator | + ping -c3 192.168.112.188
2026-03-23 01:31:59.007857 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-03-23 01:31:59.007928 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=5.45 ms
2026-03-23 01:32:00.004590 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.12 ms
2026-03-23 01:32:01.006203 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.80 ms
2026-03-23 01:32:01.006280 | orchestrator |
2026-03-23 01:32:01.006288 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-03-23 01:32:01.006295 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-23 01:32:01.006370 | orchestrator | rtt min/avg/max/mdev = 1.798/3.120/5.449/1.651 ms
2026-03-23 01:32:01.006453 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-23 01:32:01.006462 | orchestrator | + ping -c3 192.168.112.192
2026-03-23 01:32:01.016789 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2026-03-23 01:32:01.016876 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=5.48 ms
2026-03-23 01:32:02.015762 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.97 ms
2026-03-23 01:32:03.016070 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.93 ms
2026-03-23 01:32:03.016153 | orchestrator |
2026-03-23 01:32:03.016187 | orchestrator | --- 192.168.112.192 ping statistics ---
2026-03-23 01:32:03.016196 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-23 01:32:03.016203 | orchestrator | rtt min/avg/max/mdev = 1.926/3.459/5.484/1.493 ms
2026-03-23 01:32:03.239898 | orchestrator | ok: Runtime: 0:16:43.845681
2026-03-23 01:32:03.302295 |
2026-03-23 01:32:03.302429 | TASK [Run tempest]
2026-03-23 01:32:04.042962 | orchestrator |
2026-03-23 01:32:04.043085 | orchestrator | # Tempest
2026-03-23 01:32:04.043094 | orchestrator |
2026-03-23 01:32:04.043099 | orchestrator | + set -e
2026-03-23 01:32:04.043107 | orchestrator | + source /opt/manager-vars.sh
2026-03-23 01:32:04.043114 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-23 01:32:04.043122 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-23 01:32:04.043141 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-23 01:32:04.043149 | orchestrator | ++ CEPH_VERSION=reef
2026-03-23 01:32:04.043155 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-23 01:32:04.043160 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-23 01:32:04.043169 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-23 01:32:04.043175 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-23 01:32:04.043179 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-23 01:32:04.043187 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-23 01:32:04.043190 | orchestrator | ++ export ARA=false
2026-03-23 01:32:04.043194 | orchestrator | ++ ARA=false
2026-03-23 01:32:04.043203 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-23 01:32:04.043207 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-23 01:32:04.043211 | orchestrator | ++ export TEMPEST=true
2026-03-23 01:32:04.043218 | orchestrator | ++ TEMPEST=true
2026-03-23 01:32:04.043222 | orchestrator | ++ export IS_ZUUL=true
2026-03-23 01:32:04.043226 | orchestrator | ++ IS_ZUUL=true
2026-03-23 01:32:04.043230 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169
2026-03-23 01:32:04.043234 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.169
2026-03-23 01:32:04.043238 | orchestrator | ++ export EXTERNAL_API=false
2026-03-23 01:32:04.043242 | orchestrator | ++ EXTERNAL_API=false
2026-03-23 01:32:04.043246 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-23 01:32:04.043249 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-23 01:32:04.043253 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-23 01:32:04.043257 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-23 01:32:04.043261 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-23 01:32:04.043264 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-23 01:32:04.043268 | orchestrator | + echo
2026-03-23 01:32:04.043272 | orchestrator | + echo '# Tempest'
2026-03-23 01:32:04.043276 | orchestrator | + echo
2026-03-23 01:32:04.043280 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-03-23 01:32:04.043284 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-03-23 01:32:05.394646 | orchestrator | 2026-03-23 01:32:05 | INFO  | Prepare task for execution of tempest.
2026-03-23 01:32:05.465217 | orchestrator | 2026-03-23 01:32:05 | INFO  | Task a571118b-e0d3-4546-90a3-05d0db28f453 (tempest) was prepared for execution.
2026-03-23 01:32:05.465295 | orchestrator | 2026-03-23 01:32:05 | INFO  | It takes a moment until task a571118b-e0d3-4546-90a3-05d0db28f453 (tempest) has been started and output is visible here.
2026-03-23 01:33:20.484258 | orchestrator |
2026-03-23 01:33:20.484357 | orchestrator | PLAY [Run tempest] *************************************************************
2026-03-23 01:33:20.484403 | orchestrator |
2026-03-23 01:33:20.484408 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-03-23 01:33:20.484423 | orchestrator | Monday 23 March 2026 01:32:08 +0000 (0:00:00.320) 0:00:00.320 **********
2026-03-23 01:33:20.484427 | orchestrator | changed: [testbed-manager]
2026-03-23 01:33:20.484432 | orchestrator |
2026-03-23 01:33:20.484436 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-03-23 01:33:20.484440 | orchestrator | Monday 23 March 2026 01:32:09 +0000 (0:00:01.054) 0:00:01.375 **********
2026-03-23 01:33:20.484444 | orchestrator | changed: [testbed-manager]
2026-03-23 01:33:20.484448 | orchestrator |
2026-03-23 01:33:20.484452 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-03-23 01:33:20.484456 | orchestrator | Monday 23 March 2026 01:32:11 +0000 (0:00:00.411) 0:00:02.658 **********
2026-03-23 01:33:20.484460 | orchestrator | ok: [testbed-manager]
2026-03-23 01:33:20.484465 | orchestrator |
2026-03-23 01:33:20.484469 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-03-23 01:33:20.484473 | orchestrator | Monday 23 March 2026 01:32:11 +0000 (0:00:00.411) 0:00:03.070 **********
2026-03-23 01:33:20.484477 | orchestrator | changed: [testbed-manager]
2026-03-23 01:33:20.484481 | orchestrator |
2026-03-23 01:33:20.484485 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-03-23 01:33:20.484489 | orchestrator | Monday 23 March 2026 01:32:32 +0000 (0:00:20.473) 0:00:23.544 **********
2026-03-23 01:33:20.484513 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-03-23 01:33:20.484517 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-03-23 01:33:20.484523 | orchestrator |
2026-03-23 01:33:20.484527 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-03-23 01:33:20.484531 | orchestrator | Monday 23 March 2026 01:32:39 +0000 (0:00:07.727) 0:00:31.271 **********
2026-03-23 01:33:20.484535 | orchestrator | ok: [testbed-manager] => {
2026-03-23 01:33:20.484539 | orchestrator |     "changed": false,
2026-03-23 01:33:20.484543 | orchestrator |     "msg": "All assertions passed"
2026-03-23 01:33:20.484547 | orchestrator | }
2026-03-23 01:33:20.484551 | orchestrator |
2026-03-23 01:33:20.484555 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-03-23 01:33:20.484559 | orchestrator | Monday 23 March 2026 01:32:40 +0000 (0:00:00.199) 0:00:31.470 **********
2026-03-23 01:33:20.484562 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 01:33:20.484566 | orchestrator |
2026-03-23 01:33:20.484570 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-03-23 01:33:20.484574 | orchestrator | Monday 23 March 2026 01:32:43 +0000 (0:00:03.534) 0:00:35.005 **********
2026-03-23 01:33:20.484578 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 01:33:20.484581 | orchestrator |
2026-03-23 01:33:20.484585 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-03-23 01:33:20.484589 | orchestrator | Monday 23 March 2026 01:32:45 +0000 (0:00:01.847) 0:00:36.853 **********
2026-03-23 01:33:20.484593 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 01:33:20.484596 | orchestrator |
2026-03-23 01:33:20.484600 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-03-23 01:33:20.484604 | orchestrator | Monday 23 March 2026 01:32:49 +0000 (0:00:03.669) 0:00:40.522 **********
2026-03-23 01:33:20.484608 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 01:33:20.484612 | orchestrator |
2026-03-23 01:33:20.484615 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-03-23 01:33:20.484619 | orchestrator | Monday 23 March 2026 01:32:49 +0000 (0:00:00.182) 0:00:40.705 **********
2026-03-23 01:33:20.484623 | orchestrator | changed: [testbed-manager]
2026-03-23 01:33:20.484627 | orchestrator |
2026-03-23 01:33:20.484631 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-03-23 01:33:20.484634 | orchestrator | Monday 23 March 2026 01:32:52 +0000 (0:00:02.980) 0:00:43.686 **********
2026-03-23 01:33:20.484638 | orchestrator | changed: [testbed-manager]
2026-03-23 01:33:20.484642 | orchestrator |
2026-03-23 01:33:20.484646 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-03-23 01:33:20.484649 | orchestrator | Monday 23 March 2026 01:33:00 +0000 (0:00:08.620) 0:00:52.306 **********
2026-03-23 01:33:20.484653 | orchestrator | changed: [testbed-manager]
2026-03-23 01:33:20.484657 | orchestrator |
2026-03-23 01:33:20.484661 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-03-23 01:33:20.484665 | orchestrator | Monday 23 March 2026 01:33:01 +0000 (0:00:00.672) 0:00:52.979 **********
2026-03-23 01:33:20.484668 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 01:33:20.484672 | orchestrator |
2026-03-23 01:33:20.484676 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-03-23 01:33:20.484680 | orchestrator | Monday 23 March 2026 01:33:03 +0000 (0:00:01.519) 0:00:54.499 **********
2026-03-23 01:33:20.484683 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 01:33:20.484687 | orchestrator |
2026-03-23 01:33:20.484691 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-03-23 01:33:20.484715 | orchestrator | Monday 23 March 2026 01:33:04 +0000 (0:00:01.559) 0:00:56.058 **********
2026-03-23 01:33:20.484719 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 01:33:20.484722 | orchestrator |
2026-03-23 01:33:20.484726 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-03-23 01:33:20.484740 | orchestrator | Monday 23 March 2026 01:33:04 +0000 (0:00:00.187) 0:00:56.246 **********
2026-03-23 01:33:20.484745 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 01:33:20.484748 | orchestrator |
2026-03-23 01:33:20.484757 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-03-23 01:33:20.484761 | orchestrator | Monday 23 March 2026 01:33:05 +0000 (0:00:00.356) 0:00:56.603 **********
2026-03-23 01:33:20.484765 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-23 01:33:20.484769 | orchestrator |
2026-03-23 01:33:20.484772 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-03-23 01:33:20.484789 | orchestrator | Monday 23 March 2026 01:33:09 +0000 (0:00:03.967) 0:01:00.570 **********
2026-03-23 01:33:20.484793 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-03-23 01:33:20.484797 | orchestrator |     "changed": false,
2026-03-23 01:33:20.484801 | orchestrator |     "msg": "All assertions passed"
2026-03-23 01:33:20.484805 | orchestrator | }
2026-03-23 01:33:20.484809 | orchestrator |
2026-03-23 01:33:20.484813 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-03-23 01:33:20.484817 | orchestrator | Monday 23 March 2026 01:33:09 +0000 (0:00:00.191) 0:01:00.762 **********
2026-03-23 01:33:20.484821 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-23 01:33:20.484825 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-23 01:33:20.484829 | orchestrator | skipping: [testbed-manager]
2026-03-23 01:33:20.484833 | orchestrator |
2026-03-23 01:33:20.484837 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-23 01:33:20.484841 | orchestrator | Monday 23 March 2026 01:33:09 +0000 (0:00:00.168) 0:01:00.930 **********
2026-03-23 01:33:20.484844 | orchestrator | skipping: [testbed-manager]
2026-03-23 01:33:20.484848 | orchestrator |
2026-03-23 01:33:20.484852 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-23 01:33:20.484856 | orchestrator | Monday 23 March 2026 01:33:09 +0000 (0:00:00.155) 0:01:01.086 **********
2026-03-23 01:33:20.484860 | orchestrator | ok: [testbed-manager]
2026-03-23 01:33:20.484863 | orchestrator |
2026-03-23 01:33:20.484867 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-23 01:33:20.484871 | orchestrator | Monday 23 March 2026 01:33:10 +0000 (0:00:00.475) 0:01:01.561 **********
2026-03-23 01:33:20.484875 | orchestrator | changed: [testbed-manager]
2026-03-23 01:33:20.484878 | orchestrator |
2026-03-23 01:33:20.484882 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-23 01:33:20.484886 | orchestrator | Monday 23 March 2026 01:33:11 +0000 (0:00:00.897) 0:01:02.458 **********
2026-03-23 01:33:20.484890 | orchestrator | ok: [testbed-manager]
2026-03-23 01:33:20.484894 | orchestrator |
2026-03-23 01:33:20.484898 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-23 01:33:20.484902 | orchestrator | Monday 23 March 2026 01:33:11 +0000 (0:00:00.420) 0:01:02.879 **********
2026-03-23 01:33:20.484906 | orchestrator | skipping: [testbed-manager]
2026-03-23 01:33:20.484909 | orchestrator |
2026-03-23 01:33:20.484913 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-23 01:33:20.484917 | orchestrator | Monday 23 March 2026 01:33:11 +0000 (0:00:00.296) 0:01:03.176 **********
2026-03-23 01:33:20.484921 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-23 01:33:20.484925 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-23 01:33:20.484929 | orchestrator |
2026-03-23 01:33:20.484932 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-23 01:33:20.484936 | orchestrator | Monday 23 March 2026 01:33:19 +0000 (0:00:07.670) 0:01:10.847 **********
2026-03-23 01:33:20.484940 | orchestrator | changed: [testbed-manager]
2026-03-23 01:33:20.484947 | orchestrator |
2026-03-23 01:33:20.484951 | orchestrator | PLAY RECAP *********************************************************************
2026-03-23 01:33:20.484970 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-23 01:33:20.484975 | orchestrator |
2026-03-23 01:33:20.484979 | orchestrator |
2026-03-23 01:33:20.484983 | orchestrator | TASKS RECAP ********************************************************************
2026-03-23 01:33:20.484986 | orchestrator | Monday 23 March 2026 01:33:20 +0000 (0:00:01.002) 0:01:11.850 **********
2026-03-23 01:33:20.484990 | orchestrator | ===============================================================================
2026-03-23 01:33:20.484994 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 20.47s
2026-03-23 01:33:20.484998 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.62s
2026-03-23 01:33:20.485002 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 7.73s
2026-03-23 01:33:20.485005 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.67s
2026-03-23 01:33:20.485014 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.97s
2026-03-23 01:33:20.485017 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.67s
2026-03-23 01:33:20.485021 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.53s
2026-03-23 01:33:20.485025 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.98s
2026-03-23 01:33:20.485029 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.85s
2026-03-23 01:33:20.485033 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.56s
2026-03-23 01:33:20.485036 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.52s
2026-03-23 01:33:20.485040 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.28s
2026-03-23 01:33:20.485044 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.05s
2026-03-23 01:33:20.485048 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.00s
2026-03-23 01:33:20.485051 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.90s
2026-03-23 01:33:20.485055 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.67s
2026-03-23 01:33:20.485059 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.48s
2026-03-23 01:33:20.485065 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.42s
2026-03-23 01:33:20.729620 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.41s
2026-03-23 01:33:20.730526 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.36s
2026-03-23 01:33:20.947401 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-23 01:33:20.952128 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-23 01:33:20.957610 | orchestrator |
2026-03-23 01:33:20.957678 | orchestrator | ## IDENTITY (API)
2026-03-23 01:33:20.957684 | orchestrator |
2026-03-23 01:33:20.957688 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-23 01:33:20.957693 | orchestrator | + echo
2026-03-23 01:33:20.957697 | orchestrator | + echo '## IDENTITY (API)'
2026-03-23 01:33:20.957701 | orchestrator | + echo
2026-03-23 01:33:20.957705 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-23 01:33:20.957711 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-23 01:33:20.958505 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-23 01:33:20.959439 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-23 01:33:20.961512 | orchestrator | + tee -a /opt/tempest/20260323-0133.log
2026-03-23 01:33:24.633630 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-23 01:33:24.633745 | orchestrator | Did you mean one of these?
2026-03-23 01:33:24.633761 | orchestrator | help
2026-03-23 01:33:24.633769 | orchestrator | init
2026-03-23 01:33:24.999202 | orchestrator |
2026-03-23 01:33:24.999295 | orchestrator | ## IMAGE (API)
2026-03-23 01:33:24.999312 | orchestrator |
2026-03-23 01:33:24.999325 | orchestrator | + echo
2026-03-23 01:33:24.999338 | orchestrator | + echo '## IMAGE (API)'
2026-03-23 01:33:24.999351 | orchestrator | + echo
2026-03-23 01:33:24.999404 | orchestrator | + _tempest tempest.api.image.v2
2026-03-23 01:33:24.999415 | orchestrator | + local regex=tempest.api.image.v2
2026-03-23 01:33:25.000333 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-23 01:33:25.000497 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-23 01:33:25.003466 | orchestrator | + tee -a /opt/tempest/20260323-0133.log
2026-03-23 01:33:28.645412 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-23 01:33:28.645504 | orchestrator | Did you mean one of these?
2026-03-23 01:33:28.645515 | orchestrator | help
2026-03-23 01:33:28.645522 | orchestrator | init
2026-03-23 01:33:29.043461 | orchestrator |
2026-03-23 01:33:29.043552 | orchestrator | ## NETWORK (API)
2026-03-23 01:33:29.043582 | orchestrator |
2026-03-23 01:33:29.043589 | orchestrator | + echo
2026-03-23 01:33:29.043594 | orchestrator | + echo '## NETWORK (API)'
2026-03-23 01:33:29.043600 | orchestrator | + echo
2026-03-23 01:33:29.043605 | orchestrator | + _tempest tempest.api.network
2026-03-23 01:33:29.043610 | orchestrator | + local regex=tempest.api.network
2026-03-23 01:33:29.044034 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-03-23 01:33:29.045293 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-23 01:33:29.048640 | orchestrator | + tee -a /opt/tempest/20260323-0133.log
2026-03-23 01:33:32.512609 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-23 01:33:32.512691 | orchestrator | Did you mean one of these?
2026-03-23 01:33:32.512709 | orchestrator | help
2026-03-23 01:33:32.512713 | orchestrator | init
2026-03-23 01:33:32.752017 | orchestrator |
2026-03-23 01:33:32.752085 | orchestrator | ## VOLUME (API)
2026-03-23 01:33:32.752101 | orchestrator |
2026-03-23 01:33:32.752108 | orchestrator | + echo
2026-03-23 01:33:32.752114 | orchestrator | + echo '## VOLUME (API)'
2026-03-23 01:33:32.752121 | orchestrator | + echo
2026-03-23 01:33:32.752127 | orchestrator | + _tempest tempest.api.volume
2026-03-23 01:33:32.752134 | orchestrator | + local regex=tempest.api.volume
2026-03-23 01:33:32.754311 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-03-23 01:33:32.754686 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-23 01:33:32.758210 | orchestrator | + tee -a /opt/tempest/20260323-0133.log
2026-03-23 01:33:35.936689 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-23 01:33:35.936764 | orchestrator | Did you mean one of these?
2026-03-23 01:33:35.936772 | orchestrator | help
2026-03-23 01:33:35.936777 | orchestrator | init
2026-03-23 01:33:36.193210 | orchestrator |
2026-03-23 01:33:36.193303 | orchestrator | ## COMPUTE (API)
2026-03-23 01:33:36.193352 | orchestrator |
2026-03-23 01:33:36.193360 | orchestrator | + echo
2026-03-23 01:33:36.193367 | orchestrator | + echo '## COMPUTE (API)'
2026-03-23 01:33:36.193439 | orchestrator | + echo
2026-03-23 01:33:36.193445 | orchestrator | + _tempest tempest.api.compute
2026-03-23 01:33:36.193470 | orchestrator | + local regex=tempest.api.compute
2026-03-23 01:33:36.193628 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-03-23 01:33:36.195548 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-23 01:33:36.199878 | orchestrator | + tee -a /opt/tempest/20260323-0133.log
2026-03-23 01:33:39.605975 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-23 01:33:39.606126 | orchestrator | Did you mean one of these?
2026-03-23 01:33:39.606137 | orchestrator | help
2026-03-23 01:33:39.606144 | orchestrator | init
2026-03-23 01:33:39.964611 | orchestrator |
2026-03-23 01:33:39.964706 | orchestrator | ## DNS (API)
2026-03-23 01:33:39.964718 | orchestrator |
2026-03-23 01:33:39.964725 | orchestrator | + echo
2026-03-23 01:33:39.964731 | orchestrator | + echo '## DNS (API)'
2026-03-23 01:33:39.964739 | orchestrator | + echo
2026-03-23 01:33:39.964746 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-03-23 01:33:39.964754 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-03-23 01:33:39.965143 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-03-23 01:33:39.966245 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-23 01:33:39.970410 | orchestrator | + tee -a /opt/tempest/20260323-0133.log
2026-03-23 01:33:43.450267 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-23 01:33:43.450357 | orchestrator | Did you mean one of these?
2026-03-23 01:33:43.450369 | orchestrator | help
2026-03-23 01:33:43.450390 | orchestrator | init
2026-03-23 01:33:43.730671 | orchestrator |
2026-03-23 01:33:43.730749 | orchestrator | ## OBJECT-STORE (API)
2026-03-23 01:33:43.730759 | orchestrator |
2026-03-23 01:33:43.730766 | orchestrator | + echo
2026-03-23 01:33:43.730772 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-03-23 01:33:43.730779 | orchestrator | + echo
2026-03-23 01:33:43.730785 | orchestrator | + _tempest tempest.api.object_storage
2026-03-23 01:33:43.730793 | orchestrator | + local regex=tempest.api.object_storage
2026-03-23 01:33:43.731459 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-23 01:33:43.731504 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-03-23 01:33:43.734717 | orchestrator | + tee -a /opt/tempest/20260323-0133.log
2026-03-23 01:33:46.936734 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-23 01:33:46.936803 | orchestrator | Did you mean one of these?
2026-03-23 01:33:46.936811 | orchestrator | help
2026-03-23 01:33:46.936816 | orchestrator | init
2026-03-23 01:33:47.398573 | orchestrator | ok: Runtime: 0:01:43.553729
2026-03-23 01:33:47.418470 |
2026-03-23 01:33:47.418626 | TASK [Check prometheus alert status]
2026-03-23 01:33:47.951920 | orchestrator | skipping: Conditional result was False
2026-03-23 01:33:47.955383 |
2026-03-23 01:33:47.955543 | PLAY RECAP
2026-03-23 01:33:47.955671 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-03-23 01:33:47.955733 |
2026-03-23 01:33:48.193024 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-23 01:33:48.195677 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-23 01:33:49.038715 |
2026-03-23 01:33:49.039015 | PLAY [Post output play]
2026-03-23 01:33:49.069577 |
2026-03-23 01:33:49.069777 | LOOP [stage-output : Register sources]
2026-03-23 01:33:49.143481 |
2026-03-23 01:33:49.144012 | TASK [stage-output : Check sudo]
2026-03-23 01:33:50.035326 | orchestrator | sudo: a password is required
2026-03-23 01:33:50.184259 | orchestrator | ok: Runtime: 0:00:00.013297
2026-03-23 01:33:50.199515 |
2026-03-23 01:33:50.199675 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-23 01:33:50.240419 |
2026-03-23 01:33:50.240785 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-23 01:33:50.309626 | orchestrator | ok
2026-03-23 01:33:50.319032 |
2026-03-23 01:33:50.319181 | LOOP [stage-output : Ensure target folders exist]
2026-03-23 01:33:50.853569 | orchestrator | ok: "docs"
2026-03-23 01:33:50.853950 |
2026-03-23 01:33:51.142139 | orchestrator | ok: "artifacts"
2026-03-23 01:33:51.439371 | orchestrator | ok: "logs"
2026-03-23 01:33:51.454908 |
2026-03-23 01:33:51.455050 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-23 01:33:51.491839 |
2026-03-23 01:33:51.492152 | TASK [stage-output : Make all log files readable]
2026-03-23 01:33:51.825177 | orchestrator | ok
2026-03-23 01:33:51.834659 |
2026-03-23 01:33:51.834791 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-23 01:33:51.870853 | orchestrator | skipping: Conditional result was False
2026-03-23 01:33:51.882075 |
2026-03-23 01:33:51.882258 | TASK [stage-output : Discover log files for compression]
2026-03-23 01:33:51.907128 | orchestrator | skipping: Conditional result was False
2026-03-23 01:33:51.917996 |
2026-03-23 01:33:51.918168 | LOOP [stage-output : Archive everything from logs]
2026-03-23 01:33:51.959663 |
2026-03-23 01:33:51.959835 | PLAY [Post cleanup play]
2026-03-23 01:33:51.969616 |
2026-03-23 01:33:51.969734 | TASK [Set cloud fact (Zuul deployment)]
2026-03-23 01:33:52.038004 | orchestrator | ok
2026-03-23 01:33:52.049471 |
2026-03-23 01:33:52.049595 | TASK [Set cloud fact (local deployment)]
2026-03-23 01:33:52.085498 | orchestrator | skipping: Conditional result was False
2026-03-23 01:33:52.102521 |
2026-03-23 01:33:52.102679 | TASK [Clean the cloud environment]
2026-03-23 01:33:52.816563 | orchestrator | 2026-03-23 01:33:52 - clean up servers
2026-03-23 01:33:53.556460 | orchestrator | 2026-03-23 01:33:53 - testbed-manager
2026-03-23 01:33:53.634679 | orchestrator | 2026-03-23 01:33:53 - testbed-node-0
2026-03-23 01:33:53.719886 | orchestrator | 2026-03-23 01:33:53 - testbed-node-3
2026-03-23 01:33:53.808082 | orchestrator | 2026-03-23 01:33:53 - testbed-node-1
2026-03-23 01:33:53.886797 | orchestrator | 2026-03-23 01:33:53 - testbed-node-4
2026-03-23 01:33:53.984157 | orchestrator | 2026-03-23 01:33:53 - testbed-node-5
2026-03-23 01:33:54.074338 | orchestrator | 2026-03-23 01:33:54 - testbed-node-2
2026-03-23 01:33:54.158948 | orchestrator | 2026-03-23 01:33:54 - clean up keypairs
2026-03-23 01:33:54.174191 | orchestrator | 2026-03-23 01:33:54 - testbed
2026-03-23 01:33:54.194807 | orchestrator | 2026-03-23 01:33:54 - wait for servers to be gone
2026-03-23 01:34:05.139439 | orchestrator | 2026-03-23 01:34:05 - clean up ports
2026-03-23 01:34:05.311719 | orchestrator | 2026-03-23 01:34:05 - 27b1cd0a-838d-45d2-8f8e-246a058d7155
2026-03-23 01:34:05.589660 | orchestrator | 2026-03-23 01:34:05 - 372052dd-c7b8-4450-b257-6163748ede4a
2026-03-23 01:34:06.080060 | orchestrator | 2026-03-23 01:34:06 - 4553563c-ee35-47cc-9928-70ac9c97e77c
2026-03-23 01:34:06.357699 | orchestrator | 2026-03-23 01:34:06 - 67ace1a1-530a-4ebf-b0e8-a0543aec58a8
2026-03-23 01:34:06.574460 | orchestrator | 2026-03-23 01:34:06 - 8c073c5a-6d00-45f5-b6e2-c206958d0b73
2026-03-23 01:34:06.778973 | orchestrator | 2026-03-23 01:34:06 - b0ee0518-cf10-49a0-b487-7bb0210b923b
2026-03-23 01:34:06.988897 | orchestrator | 2026-03-23 01:34:06 - f28fa15b-193f-453c-9370-b85a84f45546
2026-03-23 01:34:07.201549 | orchestrator | 2026-03-23 01:34:07 - clean up volumes
2026-03-23 01:34:07.370247 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-1-node-base
2026-03-23 01:34:07.411789 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-2-node-base
2026-03-23 01:34:07.448191 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-0-node-base
2026-03-23 01:34:07.488021 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-3-node-base
2026-03-23 01:34:07.526319 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-5-node-base
2026-03-23 01:34:07.570833 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-4-node-base
2026-03-23 01:34:07.613983 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-manager-base
2026-03-23 01:34:07.657516 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-2-node-5
2026-03-23 01:34:07.697485 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-0-node-3
2026-03-23 01:34:07.740168 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-8-node-5
2026-03-23 01:34:07.787887 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-1-node-4
2026-03-23 01:34:07.828313 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-6-node-3
2026-03-23 01:34:07.875597 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-5-node-5
2026-03-23 01:34:07.918747 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-7-node-4
2026-03-23 01:34:07.958454 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-3-node-3
2026-03-23 01:34:07.997556 | orchestrator | 2026-03-23 01:34:07 - testbed-volume-4-node-4
2026-03-23 01:34:08.043302 | orchestrator | 2026-03-23 01:34:08 - disconnect routers
2026-03-23 01:34:08.160292 | orchestrator | 2026-03-23 01:34:08 - testbed
2026-03-23 01:34:09.177691 | orchestrator | 2026-03-23 01:34:09 - clean up subnets
2026-03-23 01:34:09.235813 | orchestrator | 2026-03-23 01:34:09 - subnet-testbed-management
2026-03-23 01:34:09.412176 | orchestrator | 2026-03-23 01:34:09 - clean up networks
2026-03-23 01:34:09.621928 | orchestrator | 2026-03-23 01:34:09 - net-testbed-management
2026-03-23 01:34:09.914112 | orchestrator | 2026-03-23 01:34:09 - clean up security groups
2026-03-23 01:34:09.962067 | orchestrator | 2026-03-23 01:34:09 - testbed-node
2026-03-23 01:34:10.073565 | orchestrator | 2026-03-23 01:34:10 - testbed-management
2026-03-23 01:34:10.638152 | orchestrator | 2026-03-23 01:34:10 - clean up floating ips
2026-03-23 01:34:10.676593 | orchestrator | 2026-03-23 01:34:10 - 81.163.192.169
2026-03-23 01:34:11.034285 | orchestrator | 2026-03-23 01:34:11 - clean up routers
2026-03-23 01:34:11.156370 | orchestrator | 2026-03-23 01:34:11 - testbed
2026-03-23 01:34:12.188053 | orchestrator | ok: Runtime: 0:00:19.643259
2026-03-23 01:34:12.192328 |
2026-03-23 01:34:12.192511 | PLAY RECAP
2026-03-23 01:34:12.192659 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-23 01:34:12.192753 |
2026-03-23 01:34:12.331509 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-23 01:34:12.332616 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-23 01:34:13.046654 |
2026-03-23 01:34:13.046823 | PLAY [Cleanup play]
2026-03-23 01:34:13.063856 |
2026-03-23 01:34:13.064003 | TASK [Set cloud fact (Zuul deployment)]
2026-03-23 01:34:13.119373 | orchestrator | ok
2026-03-23 01:34:13.127750 |
2026-03-23 01:34:13.127885 | TASK [Set cloud fact (local deployment)]
2026-03-23 01:34:13.162441 | orchestrator | skipping: Conditional result was False
2026-03-23 01:34:13.179830 |
2026-03-23 01:34:13.179978 | TASK [Clean the cloud environment]
2026-03-23 01:34:14.390802 | orchestrator | 2026-03-23 01:34:14 - clean up servers
2026-03-23 01:34:14.878829 | orchestrator | 2026-03-23 01:34:14 - clean up keypairs
2026-03-23 01:34:14.892964 | orchestrator | 2026-03-23 01:34:14 - wait for servers to be gone
2026-03-23 01:34:14.934920 | orchestrator | 2026-03-23 01:34:14 - clean up ports
2026-03-23 01:34:15.045017 | orchestrator | 2026-03-23 01:34:15 - clean up volumes
2026-03-23 01:34:15.131016 | orchestrator | 2026-03-23 01:34:15 - disconnect routers
2026-03-23 01:34:15.160976 | orchestrator | 2026-03-23 01:34:15 - clean up subnets
2026-03-23 01:34:15.182654 | orchestrator | 2026-03-23 01:34:15 - clean up networks
2026-03-23 01:34:15.337552 | orchestrator | 2026-03-23 01:34:15 - clean up security groups
2026-03-23 01:34:15.372363 | orchestrator | 2026-03-23 01:34:15 - clean up floating ips
2026-03-23 01:34:15.396525 | orchestrator | 2026-03-23 01:34:15 - clean up routers
2026-03-23 01:34:15.718571 | orchestrator | ok: Runtime: 0:00:01.486953
2026-03-23 01:34:15.722667 |
2026-03-23 01:34:15.722890 | PLAY RECAP
2026-03-23 01:34:15.723023 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-23 01:34:15.723113 |
2026-03-23 01:34:15.852247 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-23 01:34:15.855127 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-23 01:34:16.595740 |
2026-03-23 01:34:16.595906 | PLAY [Base post-fetch]
2026-03-23 01:34:16.611547 |
2026-03-23 01:34:16.611694 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-23 01:34:16.667590 | orchestrator | skipping: Conditional result was False
2026-03-23 01:34:16.682434 |
2026-03-23 01:34:16.682645 | TASK [fetch-output : Set log path for single node]
2026-03-23 01:34:16.723421 | orchestrator | ok
2026-03-23 01:34:16.729811 |
2026-03-23 01:34:16.729930 | LOOP [fetch-output : Ensure local output dirs]
2026-03-23 01:34:17.233561 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/c4227f186e3748eea4a17831dc9e109f/work/logs"
2026-03-23 01:34:17.526684 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c4227f186e3748eea4a17831dc9e109f/work/artifacts"
2026-03-23 01:34:17.802787 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c4227f186e3748eea4a17831dc9e109f/work/docs"
2026-03-23 01:34:17.821506 |
2026-03-23 01:34:17.821662 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-23 01:34:18.740919 | orchestrator | changed: .d..t...... ./
2026-03-23 01:34:18.741284 | orchestrator | changed: All items complete
2026-03-23 01:34:18.741348 |
2026-03-23 01:34:19.463677 | orchestrator | changed: .d..t...... ./
2026-03-23 01:34:20.183111 | orchestrator | changed: .d..t...... ./
2026-03-23 01:34:20.206186 |
2026-03-23 01:34:20.206319 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-23 01:34:20.237565 | orchestrator | skipping: Conditional result was False
2026-03-23 01:34:20.240822 | orchestrator | skipping: Conditional result was False
2026-03-23 01:34:20.253795 |
2026-03-23 01:34:20.253892 | PLAY RECAP
2026-03-23 01:34:20.253961 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-23 01:34:20.254013 |
2026-03-23 01:34:20.378443 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-23 01:34:20.381191 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-23 01:34:21.149548 |
2026-03-23 01:34:21.149705 | PLAY [Base post]
2026-03-23 01:34:21.164402 |
2026-03-23 01:34:21.164534 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-23 01:34:22.200617 | orchestrator | changed
2026-03-23 01:34:22.210731 |
2026-03-23 01:34:22.210951 | PLAY RECAP
2026-03-23 01:34:22.211037 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-23 01:34:22.211160 |
2026-03-23 01:34:22.337940 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-23 01:34:22.339117 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-23 01:34:23.140746 |
2026-03-23 01:34:23.140915 | PLAY [Base post-logs]
2026-03-23 01:34:23.151888 |
2026-03-23 01:34:23.152023 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-23 01:34:23.614471 | localhost | changed
2026-03-23 01:34:23.624427 |
2026-03-23 01:34:23.624572 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-23 01:34:23.660328 | localhost | ok
2026-03-23 01:34:23.664166 |
2026-03-23 01:34:23.664280 | TASK [Set zuul-log-path fact]
2026-03-23 01:34:23.679850 | localhost | ok
2026-03-23 01:34:23.689146 |
2026-03-23 01:34:23.689257 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-23 01:34:23.714876 | localhost | ok
2026-03-23 01:34:23.720589 |
2026-03-23 01:34:23.720742 | TASK [upload-logs : Create log directories]
2026-03-23 01:34:24.229329 | localhost | changed
2026-03-23 01:34:24.234305 |
2026-03-23 01:34:24.234486 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-23 01:34:24.737756 | localhost -> localhost | ok: Runtime: 0:00:00.007696
2026-03-23 01:34:24.747323 |
2026-03-23 01:34:24.747524 | TASK [upload-logs : Upload logs to log server]
2026-03-23 01:34:25.314351 | localhost | Output suppressed because no_log was given
2026-03-23 01:34:25.316533 |
2026-03-23 01:34:25.316647 | LOOP [upload-logs : Compress console log and json output]
2026-03-23 01:34:25.383232 | localhost | skipping: Conditional result was False
2026-03-23 01:34:25.394314 | localhost | skipping: Conditional result was False
2026-03-23 01:34:25.407789 |
2026-03-23 01:34:25.407912 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-23 01:34:25.467870 | localhost | skipping: Conditional result was False
2026-03-23 01:34:25.468689 |
2026-03-23 01:34:25.471800 | localhost | skipping: Conditional result was False
2026-03-23 01:34:25.476555 |
2026-03-23 01:34:25.476668 | LOOP [upload-logs : Upload console log and json output]
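
Editor's note on the repeated tempest failures above: every invocation ends with `tempest: 'run --workspace-path …' is not a tempest command`, i.e. the CLI received the subcommand and all its flags as a single argv word. This pattern typically comes from expanding a quoted string where separate arguments were intended. The sketch below is a hypothetical reproduction of that shell quoting pitfall, not the actual OSISM wrapper script; the `count_args` helper and the `cmd` value are illustrative only.

```shell
# Hypothetical reproduction of the quoting pitfall. A command line kept
# in a single string variable behaves very differently depending on
# whether the expansion is quoted.
count_args() { echo "$#"; }

cmd="run --regex tempest.api.image.v2 --concurrency 16"

as_one_word=$(count_args "$cmd")   # quoted: the whole string is ONE argv word
as_words=$(count_args $cmd)        # unquoted: word-split into 5 arguments

echo "$as_one_word $as_words"      # prints: 1 5
```

Under this assumption, the idiomatic fix is to build the arguments as a shell array and expand it with `"${args[@]}"` (e.g. `args=(run --regex tempest.api.image.v2); tempest "${args[@]}"`), which preserves word boundaries without relying on unquoted expansion.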